1 |
Access control in decentralized, distributed systems / Kane, Kevin Michael, 28 August 2008 (has links)
Not available / text
|
2 |
An anonymity scheme for file retrieval systems / Tang, Wai-hung, 鄧偉雄, January 2008 (has links)
Published or final version / Computer Science / Master of Philosophy
|
3 |
Medium access control protocols for next generation wireless networks / Wang, Xudong, 05 1900 (has links)
No description available.
|
4 |
E-commerce and its derived applications: smart card certificate system and recoverable and untraceable electronic cash / January 2001 (has links)
by Liu Kai Sui. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 67-71). / Abstracts in English and Chinese.
Contents:
1. Introduction --- p.1
  1.1 Security and E-commerce --- p.3
  1.2 E-commerce: More than Commercial Activities --- p.4
  1.3 What This Thesis Contains --- p.5
2. Introduction to Cryptographic Theories --- p.7
  2.1 Six Cryptographic Primitives --- p.7
    2.1.1 Symmetric Encryption --- p.8
    2.1.2 Asymmetric Encryption --- p.8
    2.1.3 Digital Signature --- p.9
    2.1.4 Message Digest --- p.9
    2.1.5 Digital Certificate and Certificate Authority --- p.10
    2.1.6 Zero-Knowledge Proof --- p.11
  2.2 The RSA Public Key Cryptosystem --- p.12
  2.3 The ElGamal Public Key Encryption Scheme --- p.13
  2.4 Elliptic Curve Cryptosystem --- p.14
    2.4.1 The Algorithm of the Elliptic Curve Cryptosystem --- p.15
  2.5 Different Kinds of Digital Signature --- p.16
    2.5.1 RSA Digital Signature --- p.16
    2.5.2 Elliptic Curve Nyberg-Rueppel Digital Signature --- p.16
  2.6 Blind Signature --- p.17
  2.7 Cut-and-Choose Protocol --- p.18
  2.8 Diffie-Hellman Key Exchange --- p.19
3. Introduction to E-commerce, M-commerce and Rich Media M-commerce --- p.20
  3.1 1st Generation of E-commerce --- p.21
  3.2 2nd Generation of E-commerce - M-commerce --- p.21
  3.3 3rd Generation of E-commerce - Rich Media M-commerce --- p.23
  3.4 Payment Systems Used in E-commerce --- p.23
    3.4.1 Electronic Cash --- p.23
    3.4.2 Credit Card --- p.24
    3.4.3 Combined Payment System --- p.24
4. Introduction to Smart Cards --- p.25
  4.1 What is a Smart Card? --- p.25
  4.2 Advantages of Smart Cards --- p.26
    4.2.1 Portable Device --- p.26
    4.2.2 Multiple Applications --- p.26
    4.2.3 Computation Power --- p.26
    4.2.4 Security Features --- p.27
  4.3 What can Smart Cards Do? --- p.27
  4.4 Java Card --- p.28
5. A New Smart Card Certificate System --- p.30
  5.1 Introduction --- p.31
  5.2 Comparison between RSA and ECC --- p.32
  5.3 System Architecture --- p.33
    5.3.1 System Setup --- p.33
    5.3.2 Applying for a Certificate --- p.34
    5.3.3 Verification of Alice --- p.35
    5.3.4 Other Certificates - the "Hyper-Link" Concept --- p.36
      5.3.4.1 Generation of the "Hyper-Link" --- p.37
      5.3.4.2 Verification of Alice Using the "Hyper-Link" --- p.37
    5.3.5 Multiple Applications --- p.38
  5.4 Security Analysis --- p.39
    5.4.1 No Crypto-processor is Needed --- p.40
    5.4.2 PIN Protection --- p.40
    5.4.3 Digital Certificate Protection --- p.40
    5.4.4 The Private Key Never Leaves the Smart Card --- p.41
  5.5 Extensions --- p.41
    5.5.1 Biometric Security --- p.41
    5.5.2 E-Voting --- p.41
  5.6 Conclusion --- p.42
6. Introduction to Electronic Cash --- p.44
  6.1 Introduction --- p.44
  6.2 The Basic Requirements --- p.45
  6.3 Advantages of Electronic Cash over Other Payment Systems --- p.46
    6.3.1 Privacy --- p.46
    6.3.2 Off-line Payment --- p.47
    6.3.3 Suitable for Small Amount Payments --- p.47
  6.4 Basic Model of Electronic Cash --- p.48
  6.5 Examples of Electronic Cash --- p.49
    6.5.1 eCash --- p.49
    6.5.2 Mondex --- p.49
    6.5.3 Octopus Card --- p.50
7. A New Recoverable and Untraceable Electronic Cash --- p.51
  7.1 Introduction --- p.52
  7.2 The Basic Idea --- p.52
  7.3 S. Brands' Single-Term E-cash Protocol --- p.54
    7.3.1 The Setup of the System --- p.54
    7.3.2 The Withdrawal Protocol --- p.54
    7.3.3 The Payment Protocol --- p.55
    7.3.4 The Deposit Protocol --- p.56
  7.4 The Proposed Protocol --- p.57
    7.4.1 The Withdrawal Protocol --- p.57
    7.4.2 The Payment Protocol --- p.58
    7.4.3 The Deposit Protocol --- p.58
    7.4.4 The Recovery Protocol --- p.59
  7.5 Security Analysis --- p.60
    7.5.1 Conditional Untraceability --- p.60
    7.5.2 Cheating --- p.60
  7.6 Extension --- p.60
  7.7 Conclusion --- p.62
8. Conclusion --- p.63
Appendix: Paper derived from this thesis --- p.66
Bibliography --- p.67
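The blind-signature primitive listed in Chapter 2.6 is what makes the Chapter 7 e-cash untraceable: the bank signs a withdrawal without seeing which coin it is signing. A minimal Chaum-style RSA blinding sketch with toy parameters, offered as an illustration of the primitive only, not as the thesis's actual protocol:

```python
# Chaum-style RSA blind signature with toy parameters (illustration only;
# real deployments need >= 2048-bit moduli and proper message encoding).
import math
import random

p, q = 61, 53               # toy primes
n = p * q                   # public modulus (3233)
e, d = 17, 2753             # public exponent e, private exponent d (e*d = 1 mod phi(n))

def blind(m: int, r: int) -> int:
    return (m * pow(r, e, n)) % n          # m' = m * r^e mod n  (user)

def sign_blinded(m_blind: int) -> int:
    return pow(m_blind, d, n)              # s' = (m')^d mod n   (signer never sees m)

def unblind(s_blind: int, r: int) -> int:
    return (s_blind * pow(r, -1, n)) % n   # s = s' * r^(-1) = m^d mod n  (Python 3.8+)

m = 1234                                   # message (in practice, a hash of the coin)
r = random.randrange(2, n)
while math.gcd(r, n) != 1:                 # blinding factor must be invertible mod n
    r = random.randrange(2, n)

signature = unblind(sign_blinded(blind(m, r)), r)
assert pow(signature, e, n) == m           # verifies like an ordinary RSA signature
```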
|
5 |
On tracing attackers of distributed denial-of-service attack through distributed approaches / CUHK electronic theses & dissertations collection / January 2007 (has links)
The denial-of-service attack has been a pressing problem in recent years, and denial-of-service defense research has blossomed into one of the main streams in network security. Techniques such as pushback messages, ICMP traceback, and packet filtering are notable results from this active field of research.

The focus of this thesis is to study and devise efficient and practical algorithms to tackle flood-based distributed denial-of-service attacks (flood-based DDoS attacks for short), aiming to trace every attacker's location. We propose a divide-and-conquer traceback methodology. Tracing the attackers on a global scale is a difficult and tedious task, so we suggest first identifying the Internet service providers (ISPs) that contribute to the flood-based DDoS attack by using a macroscopic traceback approach. After the ISPs concerned have been found, the traceback problem is narrowed down, and the attackers can be located by using a microscopic traceback approach.

For the macroscopic traceback problem, we propose an algorithm that leverages Chandy and Lamport's well-known distributed snapshot algorithm, so that a set of border routers of the ISPs can gather statistics correctly and in a coordinated fashion. The victim site can then deduce the local traffic intensities of all the participating routers, and given the collected statistics we provide a method for the victim site to locate the attackers who sent out the dominating packet flows. Our findings show that the proposed methodology can pinpoint the locations of the attackers in a short period of time.

In the second part of the thesis, we study a well-known technique for the microscopic traceback problem: the probabilistic packet marking (PPM) algorithm by Savage et al., which has attracted the most attention among IP traceback proposals. Its key idea is to let routers encode certain information onto the attack packets based on a pre-determined probability (see the sketch below). Upon receiving a sufficient number of marked packets, the victim (or a data collection node) can reconstruct the set of paths the attack packets traversed (the attack graph), and hence obtain the locations of the attackers. We present a discrete-time Markov chain model that calculates the precise number of marked packets required to construct the attack graph.

Though the PPM algorithm is a desirable solution to the microscopic traceback problem, it is not perfect: its termination condition is not well defined in the literature, and without a proper termination condition the traceback results can be wrong. In this thesis, we provide a precise termination condition for the PPM algorithm and, based on it, devise a new algorithm named the rectified probabilistic packet marking (RPPM) algorithm. The most significant merit of the RPPM algorithm is that, when it terminates, the constructed attack graph is guaranteed to be correct with a specified level of confidence. Our findings show that the RPPM algorithm guarantees the correctness of the constructed attack graph under different marking probabilities and different network graph structures. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm.

Wong Tsz Yeung. / "September 2007." / Adviser: Man Hon Wong. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4867. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 176-185). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
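For context on the PPM idea described above: in Savage et al.'s edge-sampling scheme, each router overwrites the packet's marking field with probability p, downstream routers complete and age the mark, and the victim chains the collected edges by distance to rebuild the path. Savage et al.'s analysis bounds the expected number of packets needed by ln(d)/(p(1-p)^(d-1)) for a path of d routers; the thesis's Markov-chain model gives the precise count. A minimal simulation sketch, assuming an abstract marking field rather than the real IP-header encoding:

```python
# Sketch of Savage et al.'s edge-sampling PPM scheme (simplified; the real
# scheme packs the edge into the 16-bit IP identification field).
import random
from dataclasses import dataclass
from typing import Optional

MARK_PROB = 0.04  # marking probability p (Savage et al. suggest roughly 1/25)

@dataclass
class Mark:
    start: str           # router that wrote the mark
    end: str = ""        # next router downstream, completing the edge
    distance: int = 0    # hops travelled since the mark was written

def forward(mark: Optional[Mark], router: str) -> Optional[Mark]:
    """Marking procedure each router applies to every forwarded packet."""
    if random.random() < MARK_PROB:
        return Mark(start=router)          # overwrite any existing mark
    if mark is not None:
        if mark.distance == 0:
            mark.end = router              # complete the edge
        mark.distance += 1
    return mark

# Victim side: collect (start, end, distance) triples from marked packets;
# ordering edges by distance reconstructs the attack path.
path = ["R1", "R2", "R3", "R4"]            # hypothetical attack path
edges = set()
for _ in range(10_000):                    # simulated attack packets
    mark = None
    for router in path:
        mark = forward(mark, router)
    if mark is not None:
        edges.add((mark.start, mark.end, mark.distance))
print(sorted(edges, key=lambda e: e[2]))
# e.g. [('R4', '', 0), ('R3', 'R4', 1), ('R2', 'R3', 2), ('R1', 'R2', 3)]
# (an empty 'end' means the edge terminates at the victim itself)
```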
|
6 |
Knowledge based anomaly detection / Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task, most of which attempt to develop a model sophisticated enough to represent the full range of normal traffic behaviour. This approach has significant disadvantages. Firstly, a large amount of training data covering all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad hoc situations. In contrast, in the proposed method a number of models are created gradually, while the system is in use, to cover the variety of patterns seen; each model covers a specific region of the problem space, so novel or ad hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition the domain, adding new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they remain reasonably robust with small amounts of data, a critical property whenever a new partition is added. We have developed a two-knowledge-base approach, sketched below: one knowledge base partitions the domain; within each partition, statistics are accumulated on a number of different parameters, and the resulting data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, with the added advantage that the RDR approach allows new patterns of use to be added to the model rapidly. We also used the approach to extend previous work on prudent expert systems - expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
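A minimal sketch of the two-knowledge-base structure described above, assuming hypothetical feature names, thresholds, and statistics (the thesis's actual rules are not reproduced here):

```python
# KB1 is a Ripple Down Rules tree that routes a traffic record to a
# partition; KB2 applies a simple per-partition range statistic.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RDRNode:
    cond: Callable[[dict], bool]          # rule condition
    partition: str                        # conclusion if the rule fires
    if_true: Optional["RDRNode"] = None   # exception branch (refinement)
    if_false: Optional["RDRNode"] = None  # alternative branch

def classify(node: Optional[RDRNode], case: dict, fallback: str) -> str:
    """Walk the RDR tree; the last satisfied rule's conclusion wins."""
    result = fallback
    while node is not None:
        if node.cond(case):
            result = node.partition
            node = node.if_true
        else:
            node = node.if_false
    return result

# KB1: partition by context (hypothetical rules, e.g. holiday vs. night traffic).
kb1 = RDRNode(cond=lambda c: c["is_holiday"], partition="holiday",
              if_false=RDRNode(cond=lambda c: c["hour"] < 7,
                               partition="weekday-night"))

# KB2: per-partition running stats; flag values far outside the seen range.
stats: dict = {}  # partition -> (count, min, max)

def anomalous(partition: str, value: float, slack: float = 1.5) -> bool:
    count, lo, hi = stats.get(partition, (0, value, value))
    width = max(hi - lo, 1e-9)
    is_outlier = count >= 30 and (value > hi + slack * width
                                  or value < lo - slack * width)
    if not is_outlier:                    # only normal data updates the model
        stats[partition] = (count + 1, min(lo, value), max(hi, value))
    return is_outlier

case = {"is_holiday": False, "hour": 3, "pkts_per_sec": 920.0}
part = classify(kb1, case, fallback="weekday-day")
print(part, anomalous(part, case["pkts_per_sec"]))   # weekday-night False
```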
|
7 |
Critical information infrastructure protection for developing countries / Ellefsen, Ian David, 16 August 2012 (has links)
D.Phil. (Computer Science) / In this thesis we will investigate the development of Critical Information Infrastructure Protection (CIIP) structures in the developing world. Developing regions are experiencing fast-paced development of information infrastructures and improvements in related technologies such as Internet connectivity and wireless access. The uptake of these new technologies, and the number of new users they introduce to the Internet, can allow cyber threats to flourish. In many cases, Computer Security Incident Response Teams (CSIRTs) can be used to provide CIIP; however, developing traditional CSIRT-like structures can be problematic in developing regions, where technological challenges, legal frameworks, and limited capacity can reduce their overall effectiveness. In this thesis we will introduce the Community-oriented Security, Advisory and Warning (C-SAW) Team, a model designed to address the CIIP challenges faced by developing regions by defining a structure that is loosely coupled and flexible in nature; its community orientation allows a C-SAW Team to operate within a designated community of members. This thesis is divided into three primary parts. In Part 1 we will discuss the background research undertaken during this study; these chapters lay the foundation for the rest of the thesis. In Part 2 we will introduce the C-SAW Team model and elaborate on its construction, relationships, positioning, services, and the framework in which it can be deployed. Finally, in Part 3 we present our conclusions.
|
8 |
Mitigation of network tampering using dynamic dispatch of mobile agents / Rocke, Adam Jay, 01 January 2004 (has links)
No description available.
|