About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Accurate and efficient localisation in wireless sensor networks using a best-reference selection

Abu-Mahfouz, Adnan Mohammed, 12 October 2011
Many wireless sensor network (WSN) applications depend on knowing the position of nodes within the network if they are to function efficiently. Location information is used, for example, in item tracking, routing protocols and controlling node density. Configuring each node with its position manually is cumbersome, and not feasible in networks with mobile nodes or dynamic topologies. WSNs therefore rely on localisation algorithms for the sensor nodes to determine their own physical location. The basis of several localisation algorithms is the theory that the higher the number of reference nodes (called “references”) used, the greater the accuracy of the estimated position. However, this approach makes computation more complex and increases the likelihood that the estimated position is inaccurate. Such inaccuracy could be due to including data from nodes with a large measurement error, or from nodes that intentionally aim to undermine the localisation process. This approach also has limited success in networks with sparse references, or where data cannot always be collected from many references (due, for example, to communication obstructions or bandwidth limitations). These situations require a method for achieving reliable and accurate localisation using a limited number of references. Designing a localisation algorithm that can estimate node position with high accuracy using a low number of references is not a trivial problem: as the number of references decreases, more statistical weight is attached to each reference’s location estimate. The overall localisation accuracy therefore depends greatly on the robustness of the selection method used to eliminate inaccurate references. Various localisation algorithms and their performance in WSNs were studied. Information-fusion theory was also investigated, and a new technique, rooted in information-fusion theory, was proposed for defining the best criteria for the selection of references.
The researcher chose selection criteria to identify only those references that would increase the overall localisation accuracy. Using these criteria also minimises the number of iterations needed to refine the accuracy of the estimated position. This reduces bandwidth requirements and the time required for a position estimation after any topology change (or even after initial network deployment). The resultant algorithm achieved two main goals simultaneously: accurate location discovery and information fusion. Moreover, the algorithm fulfils several secondary design objectives: self-organising nature, simplicity, robustness, localised processing and security. The proposed method was implemented and evaluated using a commercial network simulator. This evaluation of the proposed algorithm’s performance demonstrated that it is superior to other localisation algorithms evaluated; using fewer references, the algorithm performed better in terms of accuracy, robustness, security and energy efficiency. These results confirm that the proposed selection method and associated localisation algorithm allow for reliable and accurate location information to be gathered using a minimum number of references. This decreases the computational burden of gathering and analysing location data from the high number of references previously believed to be necessary. / Thesis (PhD(Eng))--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
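The core idea above, namely attaching more weight to a few well-chosen references and pruning those likely to degrade accuracy, can be sketched in a few lines. The following is an illustrative residual-based filter assuming 2D range measurements; it is a hypothetical stand-in, not the selection criteria actually derived in the thesis:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position estimate from reference positions and
    range measurements (linearised multilateration)."""
    a0 = anchors[0]
    A = 2.0 * (anchors[1:] - a0)                       # 2 (a_i - a_0)
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def localise_with_selection(anchors, dists):
    """Estimate a position, then discard the reference whose range
    residual is largest and re-estimate from the remaining references."""
    est = trilaterate(anchors, dists)
    residuals = np.abs(np.linalg.norm(anchors - est, axis=1) - dists)
    keep = np.argsort(residuals)[:-1]                  # drop the worst
    return trilaterate(anchors[keep], dists[keep])
```

With five references and one corrupted range, dropping the largest-residual reference recovers the true position; the thesis's information-fusion criteria are of course considerably more sophisticated than this single pruning pass.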
2

Enhancing security in distributed systems with trusted computing hardware

Reid, Jason Frederick, January 2007
The need to increase the hostile attack resilience of distributed and internetworked computer systems is critical and pressing. This thesis contributes to concrete improvements in distributed systems trustworthiness through an enhanced understanding of a technical approach known as trusted computing hardware. Because of its physical and logical protection features, trusted computing hardware can reliably enforce a security policy in a threat model where the authorised user is untrusted or where the device is placed in a hostile environment. We present a critical analysis of vulnerabilities in current systems, and argue that current industry-driven trusted computing initiatives will fail in efforts to retrofit security into inherently flawed operating system designs, since there is no substitute for a sound protection architecture grounded in hardware-enforced domain isolation. In doing so we identify the limitations of hardware-based approaches. We argue that the current emphasis of these programs does not give sufficient weight to the role that operating system security plays in overall system security. New processor features that provide hardware support for virtualisation will contribute more to practical security improvement because they will allow multiple operating systems to share the same processor concurrently. New operating systems that implement a sound protection architecture will thus be able to be introduced to support applications with stringent security requirements. These can coexist alongside inherently less secure mainstream operating systems, allowing a gradual migration to less vulnerable alternatives. We examine the effectiveness of the ITSEC and Common Criteria evaluation and certification schemes as a basis for establishing assurance in trusted computing hardware.
Based on a survey of smart card certifications, we contend that the practice of artificially limiting the scope of an evaluation in order to gain a higher assurance rating is quite common. Due to a general lack of understanding in the marketplace as to how the schemes work, high evaluation assurance levels are confused with a general notion of 'high security strength'. Vendors invest little effort in correcting the misconception since they benefit from it and this has arguably undermined the value of the whole certification process. We contribute practical techniques for securing personal trusted hardware devices against a type of attack known as a relay attack. Our method is based on a novel application of a phenomenon known as side channel leakage, heretofore considered exclusively as a security vulnerability. We exploit the low latency of side channel information transfer to deliver a communication channel with timing resolution that is fine enough to detect sophisticated relay attacks. We avoid the cost and complexity associated with alternative communication techniques suggested in previous proposals. We also propose the first terrorist attack resistant distance bounding protocol that is efficient enough to be implemented on resource constrained devices. We propose a design for a privacy sensitive electronic cash scheme that leverages the confidentiality and integrity protection features of trusted computing hardware. We specify the command set and message structures and implement these in a prototype that uses Dallas Semiconductor iButtons. We consider the access control requirements for a national scale electronic health records system of the type that Australia is currently developing. We argue that an access control model capable of supporting explicit denial of privileges is required to ensure that consumers maintain their right to grant or withhold consent to disclosure of their sensitive health information in an electronic system. 
Finding this feature absent in standard role-based access control models, we propose a modification to role-based access control that supports policy constructs of this type. Explicit denial is difficult to enforce in a large scale system without an active central authority but centralisation impacts negatively on system scalability. We show how the unique properties of trusted computing hardware can address this problem. We outline a conceptual architecture for an electronic health records access control system that leverages hardware level CPU virtualisation, trusted platform modules, personal cryptographic tokens and secure coprocessors to implement role based cryptographic access control. We argue that the design delivers important scalability benefits because it enables access control decisions to be made and enforced locally on a user's computing platform in a reliable way.
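The explicit-denial construct described above can be illustrated with a toy deny-overrides check. The roles, actions and policy shape here are hypothetical, chosen only to show why standard permit-only role-based access control cannot express withheld consent:

```python
# Toy role-based access control with explicit denial ("deny overrides").
# Hypothetical roles, actions and policy shape: an illustration of the
# construct, not the access control model proposed in the thesis.
ROLE_PERMITS = {
    "treating_clinician": {"read_record", "append_note"},
    "researcher":         {"read_record"},
}

def authorised(roles, action, denied_roles):
    """Grant the action only if some role permits it and the consumer has
    not explicitly denied any of the subject's roles for that action."""
    if any(r in denied_roles.get(action, set()) for r in roles):
        return False                                   # explicit denial wins
    return any(action in ROLE_PERMITS.get(r, set()) for r in roles)
```

A consumer withholding consent from researchers would register a denial such as `{"read_record": {"researcher"}}`, leaving a treating clinician's access unaffected. Enforcing such per-consumer denials at scale without an active central authority is precisely where the thesis brings trusted hardware to bear.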
3

Sécurisation d'un lien radio UWB-IR / Security of a UWB-IR Radio Link

Benfarah, Ahmed, 10 July 2013
Due to the open, shared nature of the wireless medium, wireless communications are particularly vulnerable to security threats. In this PhD work, I focused on two types of threat: relay attacks and jamming (denial of service). UWB-IR physical-layer technology has developed considerably over the last decade, which makes it a promising candidate for short-range wireless communications. My main goal was to exploit the characteristics of the UWB-IR physical layer in order to reinforce the security of wireless communications. By simply relaying signals, an adversary can defeat cryptographic authentication protocols; the first countermeasure proposed to thwart such relay attacks was the distance bounding protocol, which combines two sides: a cryptographic authentication side and a distance-checking side. In this context, I propose two new distance bounding protocols that significantly improve the security of existing ones by means of UWB-IR physical-layer parameters: the first, STHCP (Secret Time-Hopping Code Protocol), is based on secret time-hopping codes, while the second, SMCP (Secret Mapping Code Protocol), is based on secret mapping codes. Security analysis and comparison to the state of the art highlight the merits of both. Jamming consists in the intentional emission of a signal over the channel while communication is taking place, and is a major problem for the security of wireless communications. My contributions on jamming are threefold. First, I determined the worst-case Gaussian jammer parameters (central frequency and bandwidth) against a UWB-IR communication employing PPM modulation and a non-coherent receiver, taking the signal-to-jamming ratio at the receiver output as the optimisation metric. Second, I propose a new jamming model, by analogy with attacks against ciphering algorithms, which distinguishes jamming scenarios ranging from the best case to the worst case. Third, I propose a modification of the UWB-IR physical layer, based on a cryptographic modulation driven by a stream cipher, that restricts any jamming attack to the most favourable scenario; the resulting radio combines jamming resistance with protection from eavesdropping. Finally, I addressed the problem of security embedding in an existing UWB-IR network: adding security features directly at the physical layer and transmitting them concurrently with the data, under a compatibility constraint with existing receivers. I propose two new embedding techniques for the UWB-IR physical layer, which superpose a pulse orthogonal to the original pulse in either shape or position, in order to integrate an authentication service. Performance analysis shows that both techniques satisfy all the system design constraints.
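The distance-checking side that this entry (and the relay-attack work in the previous one) builds on can be sketched minimally: the verifier times single-bit challenge/response rounds and upper-bounds the prover's distance from the worst round-trip time, since a relaying adversary can only add delay. This hypothetical sketch shows only that generic timing check; the secret time-hopping and mapping codes of STHCP and SMCP are not modelled here:

```python
import secrets

C = 299_792_458.0  # speed of light in m/s

def rapid_bit_exchange(n_rounds=32):
    """Single-bit challenges for the rapid (timed) phase of a distance
    bounding protocol."""
    return [secrets.randbits(1) for _ in range(n_rounds)]

def distance_bound(rtts_ns, processing_ns=0.0):
    """Upper-bound the prover's distance from measured round-trip times
    (in nanoseconds): d <= c * (t_rtt - t_proc) / 2. The maximum round
    is used, since a relaying adversary can only add delay."""
    return C * (max(rtts_ns) - processing_ns) * 1e-9 / 2.0
```

A 20 ns round trip bounds the prover to roughly 3 m; the protocols proposed here additionally keep the time-hopping or mapping code secret, so an external relay cannot even produce well-formed responses to forward.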
