11

Efficient Cryptographic Algorithms and Protocols for Mobile Ad Hoc Networks

Fan, Xinxin 12 April 2010 (has links)
As the next evolutionary step in digital communication systems, mobile ad hoc networks (MANETs) and their specializations, such as wireless sensor networks (WSNs), have been attracting much interest in both the research and industry communities. In MANETs, network nodes can come together and form a network without depending on any pre-existing infrastructure or human intervention. Unfortunately, the salient characteristics of MANETs, in particular the absence of infrastructure and the constrained resources of mobile devices, present enormous challenges when designing security mechanisms in this environment. Without the necessary countermeasures, wireless communications are easily intercepted and users' activities can be easily traced. This thesis presents our solutions for two important aspects of securing MANETs, namely efficient key management protocols and fast implementations of cryptographic primitives on constrained devices. Due to the tight cost budgets and constrained resources of the high-volume mobile devices used in MANETs, it is desirable to employ lightweight, specialized cryptographic primitives for many security applications. Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight cryptographic algorithm, referred to as Hummingbird, for resource-constrained devices. Hummingbird provides the designed security level with a small block size and is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we present efficient software implementations of Hummingbird on 4-, 8- and 16-bit microcontrollers from Atmel and Texas Instruments, as well as efficient hardware implementations on low-cost field-programmable gate arrays (FPGAs) from Xilinx. Our experimental results show that, after a system initialization phase, Hummingbird can achieve up to 147 and 4.7 times faster throughput for a size-optimized and a speed-optimized software implementation, respectively, when compared to the state-of-the-art ultra-lightweight block cipher PRESENT on similar platforms. In addition, the speed-optimized Hummingbird encryption core achieves a throughput of 160.4 Mbps, and the area-optimized encryption core occupies only 253 slices on a Spartan-3 XC3S200 FPGA device. Bilinear pairings on the Jacobians of (hyper-)elliptic curves have received considerable attention as a building block for constructing cryptographic schemes with novel properties in MANETs. Motivated by the work of Scott, we investigate how to use efficiently computable automorphisms to speed up pairing computations on two families of non-supersingular genus 2 hyperelliptic curves over prime fields. Our findings lead to new variants of Miller's algorithm in which the length of the main loop can be up to 4 times shorter than that of the original Miller's algorithm in the best case. We also generalize Chatterjee et al.'s idea of encapsulating the computation of the line function with the group operations to genus 2 hyperelliptic curves, and derive new explicit formulae for the group operations in projective and new coordinates in the context of pairing computations. Efficient software implementations of the Tate pairing on both a supersingular and a non-supersingular genus 2 curve with the same embedding degree k = 4 are also investigated.
Combining the new algorithm with known optimization techniques, we show that pairing computations on non-supersingular genus 2 curves over prime fields use up to 55.8% fewer field operations and run about 10% faster than those on supersingular genus 2 curves at the same security level. As an important part of a key management mechanism, an efficient key revocation protocol, which revokes the cryptographic keys of malicious nodes and isolates them from the network, is crucial for the security and robustness of MANETs. We propose a novel self-organized key revocation scheme for MANETs based on the Dirichlet multinomial model and identity-based cryptography. Firmly rooted in statistics, our key revocation scheme provides a theoretically sound basis for nodes to analyze and predict peers' behavior based on their own observations and other nodes' reports. Considering the differences among malicious behaviors, we propose to classify nodes' behavior into three categories, namely good behavior, suspicious behavior and malicious behavior. Each node in the network keeps track of these three categories and updates its knowledge about other nodes' behavior using a 3-dimensional Dirichlet distribution. Based on its own analysis, each node is able to protect itself from malicious attacks by either revoking the keys of nodes with malicious behavior or ceasing communication for some time with nodes showing suspicious behavior. The attack-resistant properties of the resulting scheme against false accusation attacks launched by independent and colluding adversaries are also analyzed through extensive simulations. In WSNs, broadcast authentication is a crucial security mechanism that allows a multitude of legitimate users to join in and disseminate messages into the network in a dynamic and authenticated way. During the past few years, several public-key based multi-user broadcast authentication schemes have been proposed in the literature to achieve immediate authentication and to address the security vulnerability intrinsic to μTESLA-like schemes. Unfortunately, the relatively slow signature verification in signature-based broadcast authentication has incurred a series of problems such as high energy consumption and long verification delay. We propose an efficient technique to accelerate signature verification in WSNs through cooperation among sensor nodes. By allowing some sensor nodes to release intermediate computation results to their neighbors during signature verification, a large number of sensor nodes can speed up their signature verification process significantly. When applying our faster signature verification technique to broadcast authentication in a 4×4 grid-based WSN, a quantitative performance analysis shows that our scheme needs 17.7% to 34.5% less energy and runs about 50% faster than the traditional signature verification method.
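The statistical core of the revocation scheme described above is standard Dirichlet-multinomial updating. The sketch below is not taken from the thesis; the uniform prior, the down-weighting of peer reports, and the decision thresholds are illustrative assumptions. It only shows how a node could maintain pseudocounts for the three behaviour categories and act on the posterior mean.

```python
# Minimal sketch of Dirichlet-multinomial behaviour tracking for key revocation.
# The prior, thresholds, and peer-report weighting are illustrative assumptions,
# not the parameters used in the thesis.

CATEGORIES = ("good", "suspicious", "malicious")

class BehaviourRecord:
    def __init__(self, prior=(1.0, 1.0, 1.0)):
        # alpha holds the Dirichlet pseudocounts for (good, suspicious, malicious).
        self.alpha = list(prior)

    def observe(self, category, count=1.0):
        """Add direct observations (or weighted peer reports) of one category."""
        self.alpha[CATEGORIES.index(category)] += count

    def expected(self):
        """Posterior mean of the 3-dimensional Dirichlet distribution."""
        total = sum(self.alpha)
        return {c: a / total for c, a in zip(CATEGORIES, self.alpha)}

def decide(record, revoke_at=0.5, suspend_at=0.3):
    """Revoke the peer's key, suspend communication, or keep communicating."""
    p = record.expected()
    if p["malicious"] >= revoke_at:
        return "revoke key"
    if p["suspicious"] >= suspend_at:
        return "suspend communication"
    return "keep communicating"

if __name__ == "__main__":
    peer = BehaviourRecord()
    peer.observe("good", 10)
    peer.observe("malicious", 4)          # own observations
    peer.observe("malicious", 3 * 0.5)    # peer reports, down-weighted
    print(peer.expected(), "->", decide(peer))
```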
12

A Fraud-Prevention Framework for Software Defined Radio Mobile Devices

Brawerman, Alessandro 13 July 2005 (has links)
The superior reconfigurability of software defined radio mobile devices has made them one of the most promising technologies in wireless networking and the mobile communication industry. The evolution from a static and rigid system to a highly dynamic environment, which offers many advantages over current systems, has been made possible thanks to the concepts of programmability and reconfigurability introduced by software defined radio technology and the higher level of flexibility and openness of its devices. Clearly, the software defined radio mobile device's flexibility is a great advantage, since the customer is able to use the same device in different parts of the world with different wireless technologies. Despite these advantages, there are still issues to be discussed regarding security. According to the Software Defined Radio Forum, some of the concerns are radio configuration download, storage and installation; users' privacy; and cloning. To address the SDR Forum's concerns, a fraud-prevention framework is proposed. The framework is composed of new pieces of hardware, new modules and new protocols that together greatly enhance the overall security of software defined radio mobile devices and of this new, highly dynamic environment. The framework offers security monitoring against malicious attacks and viruses that may affect the configuration data; protects sensitive information through the use of protected storage; creates and protects an identity for the system; employs a secure and efficient protocol for radio configuration download and update; and finally, establishes an anti-cloning scheme, which not only guarantees that no units can be cloned over the air but also raises the difficulty of cloning units even if the attacker has physical access to them. Even if cloned units exist, the anti-cloning scheme is able to identify them and deny them any service.
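The abstract does not spell out the download-and-update protocol, so the following Python sketch is only an illustration of the general idea of protecting configuration data: reject a downloaded radio configuration unless it carries a valid vendor signature held against a key in protected storage. The choice of Ed25519 (via the third-party `cryptography` package) and the function names are assumptions, not the framework's actual primitives.

```python
# Illustrative sketch (not the thesis's actual protocol): check the integrity and
# origin of a downloaded radio configuration before installing it.
# Assumes the vendor's Ed25519 public key is kept in protected storage.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def install_configuration(config_blob: bytes, signature: bytes,
                          vendor_pubkey_bytes: bytes) -> bool:
    """Install the configuration only if the vendor's signature verifies."""
    pubkey = ed25519.Ed25519PublicKey.from_public_bytes(vendor_pubkey_bytes)
    try:
        pubkey.verify(signature, config_blob)
    except InvalidSignature:
        return False  # reject tampered or forged configuration data
    # Record a fingerprint so later monitoring can detect modification in place.
    fingerprint = hashlib.sha256(config_blob).hexdigest()
    print("installing configuration, sha256 =", fingerprint)
    return True
```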
13

Verification of Security Properties Using Formal Techniques

Al-Shadly, Saleh 09 April 2013 (has links)
No description available.
14

Verification of security protocols with state in ProVerif : Avoiding false attacks when verifying freshness

Saarinen, Pasi January 2015 (has links)
One of the issues when attempting to verify security properties of a protocol is how to model the protocol. We introduce a method for verifying event freshness in tools that use the applied π-calculus and are able to verify secrecy. Event freshness can be used to prove that a protocol never generates the same key twice. In this work we encode state in the applied π-calculus and perform bounded verification of freshness for MiniDC using the ProVerif tool. MiniDC is a trivial protocol that, in each iteration of a loop, generates a unique key and outputs it on a private channel. When verifying freshness, the abstractions of ProVerif cause false attacks. We describe methods that can be used to avoid the false attacks that appear when verifying freshness. We show how to avoid some false attacks introduced by private channels, by state, and by protocols that disclose their secret. We conclude that the method used to verify freshness in MiniDC is impractical for more complicated protocols with state.
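ProVerif reasons about symbolic models in the applied π-calculus rather than about running code, but the freshness property itself is easy to restate concretely. The Python sketch below is purely illustrative (the monitor, key size and loop are assumptions): it checks on a concrete trace what the thesis verifies symbolically, namely that a MiniDC-like loop never emits the same key twice.

```python
# A concrete restatement of the freshness property checked symbolically in
# ProVerif: no key-generation event may ever repeat. Illustrative only;
# ProVerif analyses applied pi-calculus models, not running code.
import secrets

class FreshnessMonitor:
    def __init__(self):
        self.seen = set()

    def event_new_key(self, key: bytes) -> None:
        """Record a generated key and flag any repetition as a freshness violation."""
        if key in self.seen:
            raise AssertionError("freshness violated: key generated twice")
        self.seen.add(key)

def minidc_like_loop(monitor, iterations=1000):
    """Each loop iteration generates a key and 'outputs' it on a private channel."""
    for _ in range(iterations):
        key = secrets.token_bytes(16)
        monitor.event_new_key(key)

if __name__ == "__main__":
    minidc_like_loop(FreshnessMonitor())
```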
15

Automated Verification of Privacy in Security Protocols : Back and Forth Between Theory & Practice

Hirschi, Lucca 21 April 2017 (has links)
The information society we belong to heavily relies on secure information exchanges. To exchange information securely, one should use security protocols that specify how communicating agents should behave, notably by using cryptographic primitives (e.g. encryption, signatures). Given their ubiquitous and critical nature, we need to reach an extremely high level of confidence that they actually meet their goals. Those goals can be various and depend on the usage context but, more and more often, they include privacy properties (e.g. anonymity, unlinkability). Unfortunately, designed and deployed security protocols are often flawed and critical attacks are regularly disclosed, even on protocols of utmost importance, leading to a never-ending cycle between attack and fix. To break the present stalemate, we advocate the use of formal methods, which provide rigorous, mathematical frameworks and techniques to analyse security protocols. One such method, allowing for a very high level of automation, consists in analysing security protocols in the symbolic model and modelling privacy properties as equivalences between two systems. Unfortunately, deciding such equivalences is undecidable in the general case. To circumvent undecidability, two main approaches have emerged. First, for a bounded number of agents and sessions of the security protocol to analyse, it is possible to symbolically explore all possible executions, yielding decision procedures for equivalence between systems. Second, for the general case, one can semi-decide the problem by leveraging dedicated abstractions, notably relying on a strong form of equivalence (i.e. diff-equivalence). The two approaches, i.e. decision for the bounded case or semi-decision for the unbounded case, suffer from two different problems that significantly limit their practical impact. First, (symbolically) exploring all possible executions leads to the so-called state space explosion problem caused by the concurrent nature of security protocols. Concerning the unbounded case, diff-equivalence is too imprecise to meaningfully analyse some privacy properties such as unlinkability, nullifying methods and tools relying on it in such cases. In the present thesis, we address those two problems, going back and forth between theory and practice. Practical aspects motivate our work, but our solutions take the form of theoretical developments. Moreover, we make the effort to confirm the practical relevance of our solutions by putting them into practice (implementations) on real-world case studies (analyses of real-world security protocols). First, we have developed new partial order reduction techniques in order to dramatically reduce the number of states to explore without losing any attack. We design them to be compatible with equivalence verification and such that they can be nicely integrated into the frameworks on which existing procedures and tools are based.
We formally prove the soundness of such an integration in a tool and provide a full implementation. We are thus able to provide benchmarks showing the dramatic speedups brought by our techniques, and conclude that more protocols can henceforth be analysed. Second, to solve the precision issue for the unbounded case, we propose a new methodology based on the idea of ensuring privacy via sufficient conditions. We present two conditions that always imply unlinkability and anonymity and that can be verified using existing tools (e.g. ProVerif). We implement a tool that puts this methodology into practice, hence solving the precision issue for a large class of protocols. This novel approach allows us to conduct the first formal analyses of several real-world protocols (some of them widely deployed) and leads to the discovery of novel attacks.
16

Symbolic Proofs of Computational Indistinguishability

Koutsos, Adrien 27 September 2019 (has links)
Our society extensively relies on communication systems. Because such systems are used to exchange sensitive information and are pervasive, they need to be secured. Cryptographic protocols are what allow us to have secure communications. It is crucial that such protocols do not fail in providing the security properties they claim, as failures have dire consequences. Unfortunately, designing cryptographic protocols is notoriously hard, and major protocols are regularly and successfully attacked. We argue that formal verification is the best way to gain strong confidence in a protocol's security. Basically, the goal is to mathematically prove that a protocol satisfies some security property. Our objective is to develop techniques to formally verify equivalence properties of cryptographic protocols, using a method that provides strong security guarantees while being amenable to automated deduction techniques. In this thesis, we argue that the Bana-Comon model for equivalence properties meets these goals.
We support our claim through three different contributions. First, we design axioms for the usual functions used in security protocols, and for several cryptographic hypotheses. Second, we illustrate the usefulness of these axioms and of the model by completing case studies of concrete protocols: we study two RFID protocols, KCL and LAK, as well as the 5G-AKA authentication protocol used in mobile communication systems. For each of these protocols, we show existing or new attacks against current versions, propose fixes, and prove that the fixed versions are secure. Finally, we study the problem of proof automation in the Bana-Comon model by showing the decidability of a set of inference rules which is a sound, though incomplete, axiomatization of computational indistinguishability when using an IND-CCA2 encryption scheme. From a cryptographer's point of view, this can be seen as the decidability of a fixed set of cryptographic game transformations.
17

Evolutionary Library for the Communication Protocols Design

Sameš, Martin Unknown Date (has links)
The development and verification of new security protocols that meet given requirements call for automated techniques. This work deals with the possibility of using an evolutionary approach in the design of security protocols, presenting and comparing different methods and using some of them to create an evolutionary library that supports the development of new communication protocols.
18

Developing Strand Space Based Models And Proving The Correctness Of The IEEE 802.11i Authentication Protocol With Restricted Sec

Furqan, Zeeshan 01 January 2007 (has links)
The security objectives enforce the security policy, which defines what is to be protected in a network environment. The violation of these security objectives induces security threats. We introduce an explicit notion of security objectives for a security protocol. This notion should precede the formal verification process. In the absence of such a notion, a security protocol may be proven correct despite the fact that it is not equipped to defend against all potential threats. In order to establish the correctness of security objectives, we present a formal model that provides a basis for the formal verification of security protocols. We also develop modal logic, proof-based, and multi-agent approaches using the Strand Space framework. In our modal logic approach, we present the logical constructs to model a protocol's behavior in such a way that the participants can verify different security parameters by looking at their own run of the protocol. In our proof-based model, we present a generic set of proofs to establish the correctness of a security protocol. We model the 802.11i protocol in our proof-based system and then perform formal verification of the authentication property. The intruder in our model is imbued with powerful capabilities, and the repercussions of possible attacks are evaluated. Our analysis proves that the authentication of 802.11i is not compromised in the presented model. We further demonstrate how changes in our model yield a successful man-in-the-middle attack. Our multi-agent approach adds an explicit notion of multiple agents, which was missing in the Strand Space framework. The limitation of the Strand Space framework is the assumption that all the information available to a principal is either supplied initially or is contained in messages received by that principal. However, other important information may also be available to a principal in a security setting; for example, a principal may combine information from the different roles it plays in a protocol to launch a powerful attack. Our approach models the behavior of a distributed system as a multi-agent system. The presented model captures the combined information, the formal model of knowledge, and the beliefs of agents over time. After building this formal model, we present a formal proof of authentication for the 4-way handshake of the 802.11i protocol.
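For readers unfamiliar with the protocol being verified, the 4-way handshake lets both parties derive the same pairwise transient key (PTK) from the pairwise master key and the two exchanged nonces. The sketch below is a simplified illustration only: the PRF construction and labels follow IEEE 802.11i loosely and are not taken from the thesis or its formal model.

```python
# Simplified sketch of the 4-way handshake key derivation reasoned about in the
# thesis. The PRF below follows IEEE 802.11i only loosely and is meant as an
# illustration, not a conformant implementation.
import hmac, hashlib, os

def prf(key: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    """Iterate HMAC-SHA1 to expand the key material to nbytes."""
    out = b""
    i = 0
    while len(out) < nbytes:
        out += hmac.new(key, label + b"\x00" + data + bytes([i]), hashlib.sha1).digest()
        i += 1
    return out[:nbytes]

def derive_ptk(pmk, aa, spa, anonce, snonce):
    """Both peers compute the same PTK from the PMK, addresses and nonces."""
    data = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    return prf(pmk, b"Pairwise key expansion", data, 48)

if __name__ == "__main__":
    pmk = os.urandom(32)
    aa, spa = os.urandom(6), os.urandom(6)           # authenticator / supplicant MACs
    anonce, snonce = os.urandom(32), os.urandom(32)  # nonces from messages 1 and 2
    # Authenticator and supplicant derive the PTK independently and agree:
    assert derive_ptk(pmk, aa, spa, anonce, snonce) == derive_ptk(pmk, aa, spa, anonce, snonce)
```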
19

Extending Distributed Temporal Protocol Logic To A Proof Based Framework For Authentication Protocols

Muhammad, Shahabuddin 01 January 2007 (has links)
Running critical applications, such as e-commerce, in a distributed environment requires assurance of the identities of the participants communicating with each other. Providing such assurance in a distributed environment is a difficult task. The goal of a security protocol is to overcome the vulnerabilities of a distributed environment by providing a secure way to disseminate critical information into the network. However, designing a security protocol is itself an error-prone process. In addition to employing an authentication protocol, one also needs to make sure that the protocol successfully achieves its authentication goals. The Distributed Temporal Protocol Logic (DTPL) provides a language for formalizing both local and global properties of distributed communicating processes. DTPL can be effectively applied to security protocol analysis as a model checker. Although a model checker can determine flaws in a security protocol, it cannot provide a proof of the security properties of a protocol. In this research, we extend the DTPL language and construct a set of axioms by transforming the unified framework of SVO logic into DTPL. This results in a deductive-style, proof-based framework for the verification of authentication protocols. The proposed framework represents authentication protocols and concisely proves their security properties. We formalize various features essential for achieving authentication, such as message freshness, key association, and source association, in our framework. Since analyzing security protocols greatly depends upon associating a received message with its source, we separately analyze the source association axioms, translate them into our framework, and extend the idea to public-key protocols. Developing a proof-based framework in temporal logic gives us another verification tool in addition to the existing model checker. A security property of a protocol can either be verified using our approach, or a design flaw can be identified using the model checker. In this way, we can analyze a security protocol from both perspectives while benefiting from the representation of distributed temporal protocol logic. A challenge-response strategy provides a higher level of abstraction for authentication protocols. Here, we also develop a set of formulae based on the challenge-response strategy to analyze a protocol at an abstract level. This abstraction has been adapted from the authentication tests of the graph-theoretic strand space method. First, we represent a protocol in the logic and then use the challenge-response strategy to develop authentication tests. These tests help us find possible attacks on authentication protocols by investigating the originators of received messages. Identifying an unintended originator of a received message indicates possible flaws in a protocol. We have applied our strategy to several well-known protocols and have successfully identified the attacks.
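The challenge-response intuition behind the authentication tests can be illustrated concretely: a freshly generated nonce that comes back correctly transformed must have passed through a principal holding the key, which associates the response with its source. The toy Python sketch below, with an HMAC standing in for whatever transformation a real protocol uses, is an assumption-laden illustration and not part of the thesis's logical framework.

```python
# Toy illustration of the challenge-response idea behind authentication tests:
# a fresh nonce can only come back correctly transformed by a principal holding
# the shared key, which associates the response with its source.
import hmac, hashlib, os

def respond(shared_key: bytes, nonce: bytes) -> bytes:
    """The responder proves knowledge of the key by transforming the nonce."""
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def initiator_accepts(shared_key: bytes, nonce: bytes, response: bytes) -> bool:
    """Accept only if the response could only originate from a key holder."""
    return hmac.compare_digest(respond(shared_key, nonce), response)

if __name__ == "__main__":
    k = os.urandom(32)
    challenge = os.urandom(16)            # freshly generated, never reused
    assert initiator_accepts(k, challenge, respond(k, challenge))
    assert not initiator_accepts(k, challenge, os.urandom(32))
```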
20

Usability and security of human-interactive security protocols

Kainda, Ronald January 2011 (has links)
We investigate the security and usability of Human-Interactive Security Protocols (HISPs); specifically, how digests of 4 or more digits can be compared between two or more systems as conveniently as possible while ensuring that issues such as user complacency do not compromise security. We address the research question: given different association scenarios and modes of authentication in HISPs, how can we improve on existing, or design new, empirical channels that suit human and contextual needs to achieve acceptable effective security? We review the literature on HISPs, proposed empirical channels, and usability studies of HISPs; we then present the methodology of the research reported in this thesis. We go on to make a number of contributions discussing the effectiveness of empirical channels and address the design, analysis, and evaluation of these channels. In Chapter 4 we present a user study of pairwise device associations and discuss the factors affecting the effective security of empirical channels in single-user scenarios. In Chapter 5 we present a user study of group device associations and discuss the factors affecting the effective security of empirical channels in multi-user scenarios. In Chapter 7 we present a framework designed for researchers and system designers to reason about empirical channels in HISPs. The framework is grounded in experimental data and related research, and validated by experts. In Chapter 8 we present a methodology for analysing and evaluating the security and usability of HISPs. We validate the methodology by applying it in laboratory experiments on HISPs. Finally, in Chapter 6 we present a set of principles for designing secure and usable empirical channels. We demonstrate the effectiveness of these principles by proposing new empirical channels.
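To make the object of comparison concrete: in a HISP, each device derives a short numeric digest from the protocol transcript, and the humans compare the digests over the empirical channel. The sketch below is illustrative only; truncating SHA-256 to six decimal digits and the transcript format are assumptions, not the digest construction of any protocol studied in the thesis.

```python
# Sketch of the kind of short digest a HISP asks users to compare out of band.
# The construction (SHA-256 truncated to N decimal digits) is an illustrative
# assumption, not taken from any particular protocol studied in the thesis.
import hashlib

def short_digest(transcript: bytes, digits: int = 6) -> str:
    """Derive an N-digit decimal digest of the protocol transcript."""
    h = int.from_bytes(hashlib.sha256(transcript).digest(), "big")
    return str(h % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    transcript_a = b"device-A-pubkey|device-B-pubkey|session-data"
    transcript_b = b"device-A-pubkey|device-B-pubkey|session-data"
    # Users compare these values verbally or visually over the empirical channel;
    # a mismatch indicates a man-in-the-middle altering the transcript.
    print(short_digest(transcript_a), short_digest(transcript_b))
```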
