1

Interdomain user authentication and privacy

Pashalidis, Andreas January 2006 (has links)
This thesis looks at the issue of interdomain user authentication, i.e. user authentication in systems that extend over more than one administrative domain. It is divided into three parts. After a brief overview of related literature, the first part provides a taxonomy of current approaches to the problem. The taxonomy is first used to identify the relative strengths and weaknesses of each approach, and then employed as the basis for putting into context four concrete and novel schemes that are subsequently proposed in this part of the thesis. Three of these schemes build on existing technology: the first on 2nd and 3rd-generation cellular (mobile) telephony, the second on credit/debit smartcards, and the third on Trusted Computing. The fourth scheme is, in certain ways, different from the others. Most notably, unlike the other three schemes, it does not require the user to possess tamper-resistant hardware, and it is suitable for use from an untrusted access device. An implementation of the latter scheme (which works as a web proxy) is also described in this part of the thesis. As the need to preserve one’s privacy continues to gain importance in the digital world, it is important to enhance user authentication schemes with properties that enable users to remain anonymous (yet authenticated). In the second part of the thesis, anonymous credential systems are identified as a tool that can be used to achieve this goal. A formal model that captures relevant security and privacy notions for such systems is proposed. From this model, it is evident that there exist certain inherent limits to the privacy that such systems can offer. These are examined in more detail, and a scheme is proposed that mitigates the exposure to certain attacks that exploit these limits in order to compromise user privacy. The second part of the thesis also shows how to use an anonymous credential system in order to facilitate what we call ‘privacy-aware single sign-on’ in an open environment. The scheme enables the user to authenticate himself to service providers under separate identifiers, where these identifiers cannot be linked to each other, even if all service providers collude. It is demonstrated that the anonymity enhancement scheme proposed earlier is particularly suited to this special application of anonymous credential systems. Finally, the third part of the thesis concludes with some open research questions.
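As a rough, self-contained illustration of the unlinkability property that ‘privacy-aware single sign-on’ targets (not of the anonymous credential construction the thesis actually uses), the sketch below derives a distinct, stable pseudonym per service provider from a user-held master secret via HMAC; all names and parameters are invented for the example.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

// Illustrative only: derives a distinct, stable pseudonym per service
// provider from a user-held master secret. Without the secret, two
// pseudonyms cannot be linked to the same user even if providers collude.
// The thesis achieves the analogous property with anonymous credential
// systems, not with HMAC.
public class PseudonymDemo {
    static String pseudonymFor(byte[] masterSecret, String serviceProviderId)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
        byte[] tag = mac.doFinal(serviceProviderId.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(tag, 0, 8); // short identifier for display
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "user-master-secret-keep-private".getBytes(StandardCharsets.UTF_8);
        // The same user appears under unlinkable identifiers at each provider.
        System.out.println("sp-a: " + pseudonymFor(secret, "https://sp-a.example"));
        System.out.println("sp-b: " + pseudonymFor(secret, "https://sp-b.example"));
    }
}
```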
2

Rank codes and their applications to communication security

Khan, Eraj January 2012 (has links)
Today, computer networks are utilized for the sharing of information and resources more than ever before. Data transmitted over any network can be exposed to many malicious activities. Protecting the information flowing through these networks involves the design and implementation of systems that maintain security. The aim of this thesis is to investigate Rank codes for implementing communication security techniques for different application areas. This thesis can be divided into three parts; each is summarized below. Wireless sensor networks are increasingly becoming viable solutions to many challenging problems. Security is one of the main issues in some of the application areas of wireless sensor networks, such as military and Supervisory Control and Data Acquisition (SCADA) applications. Key distribution is a fundamental prerequisite for secure communication in any network. Due to the inherent resource and computation constraints of sensor nodes, link key establishment among the nodes is non-trivial. Numerous key exchange schemes have been proposed so far, but key pre-distribution schemes are perhaps best suited for large-scale deployments of resource-constrained sensor networks. We have studied different key pre-distribution schemes and proposed a scheme based on the generator matrix of maximum rank codes.
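The abstract does not detail the rank-code construction, but the general pattern of matrix-based key pre-distribution can be illustrated with Blom's classic scheme over a prime field: a trusted authority derives per-node rows from a public generator-style matrix and a secret symmetric matrix, and any two nodes can then compute a common pairwise key locally. This is a hedged sketch of the pattern only, not the thesis's maximum-rank-code scheme; all parameters are toy values.

```java
import java.util.Random;

// Blom-style key pre-distribution over GF(p), sketched for illustration.
// Each node stores one matrix row and any two nodes derive the same
// pairwise key; the thesis instead builds on generator matrices of
// maximum rank codes.
public class BlomDemo {
    static final long P = 2147483647L; // prime modulus (2^31 - 1)

    public static void main(String[] args) {
        int lambda = 2, n = 4;          // collusion threshold, number of nodes
        Random rnd = new Random(42);

        // Public matrix G: (lambda+1) x n, column j belongs to node j.
        long[][] G = randomMatrix(lambda + 1, n, rnd);

        // Secret symmetric matrix D held by the trusted authority.
        long[][] D = randomMatrix(lambda + 1, lambda + 1, rnd);
        for (int i = 0; i < D.length; i++)
            for (int j = 0; j < i; j++) D[i][j] = D[j][i];

        // Node j is pre-loaded with row j of A = (D * G)^T.
        long[][] A = transpose(multiply(D, G));

        // Nodes 0 and 1 each compute the key locally; the results agree
        // because D is symmetric: A_0 . G_1 == A_1 . G_0.
        System.out.println("node 0 derives: " + dot(A[0], column(G, 1)));
        System.out.println("node 1 derives: " + dot(A[1], column(G, 0)));
    }

    static long[][] randomMatrix(int r, int c, Random rnd) {
        long[][] m = new long[r][c];
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++) m[i][j] = Math.floorMod(rnd.nextLong(), P);
        return m;
    }

    static long[][] multiply(long[][] a, long[][] b) {
        long[][] out = new long[a.length][b[0].length];
        for (int i = 0; i < a.length; i++)
            for (int k = 0; k < b.length; k++)
                for (int j = 0; j < b[0].length; j++)
                    out[i][j] = (out[i][j] + mulmod(a[i][k], b[k][j])) % P;
        return out;
    }

    static long[][] transpose(long[][] m) {
        long[][] t = new long[m[0].length][m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[0].length; j++) t[j][i] = m[i][j];
        return t;
    }

    static long[] column(long[][] m, int j) {
        long[] col = new long[m.length];
        for (int i = 0; i < m.length; i++) col[i] = m[i][j];
        return col;
    }

    static long dot(long[] a, long[] b) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s = (s + mulmod(a[i], b[i])) % P;
        return s;
    }

    static long mulmod(long a, long b) { // safe: operands < 2^31, product < 2^62
        return (a % P) * (b % P) % P;
    }
}
```

Blom's scheme stays secure only while at most lambda nodes collude; code-based variants pursue similar thresholds with different storage and resilience trade-offs.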
3

A Search-Based Framework for Security Protocol Synthesis

Chen, Hao January 2007 (has links)
Security protocol verification has been the area where the bulk of the research in cryptographic protocols has taken place, and a number of successful supporting tools have been developed. However, not much research has been done on applying formal methods to the design of cryptographic protocols in the first place, despite wide recognition that the design of cryptographic protocols is very difficult. Most existing protocols have been designed using informal methods and rely heavily on the verification process to pick up vulnerabilities. The research reported in this thesis shows how to automatically synthesise abstract protocols using heuristic search, explains how to add high-level efficiency concerns to the synthesis, and demonstrates how to refine the abstract protocols to executable Java code.
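The abstract leaves the synthesis machinery unspecified; as a hedged sketch of the overall shape of heuristic synthesis, the skeleton below runs a best-first search over candidate "protocols", expanding each candidate, scoring it with a heuristic, and stopping when a candidate passes a verification (goal) test. The string-based toy in main and all names are invented scaffolding, not the thesis's design.

```java
import java.util.*;
import java.util.function.*;

// Generic best-first search skeleton of the kind heuristic protocol
// synthesis can be built around: expand candidate abstract protocols,
// order them by a heuristic, stop when one passes verification.
public class SynthesisSearch<S> {
    private final Function<S, List<S>> expand;     // candidate refinements
    private final ToDoubleFunction<S> heuristic;   // lower is more promising
    private final Predicate<S> verifies;           // security check (goal test)

    public SynthesisSearch(Function<S, List<S>> expand,
                           ToDoubleFunction<S> heuristic,
                           Predicate<S> verifies) {
        this.expand = expand;
        this.heuristic = heuristic;
        this.verifies = verifies;
    }

    public Optional<S> run(S start, int budget) {
        PriorityQueue<S> frontier =
            new PriorityQueue<>(Comparator.comparingDouble(heuristic));
        frontier.add(start);
        while (!frontier.isEmpty() && budget-- > 0) {
            S candidate = frontier.poll();
            if (verifies.test(candidate)) return Optional.of(candidate);
            frontier.addAll(expand.apply(candidate));
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Toy stand-in: "protocols" are strings, and we search for one that
        // contains both a nonce-exchange step (N) and an encryption step (E).
        SynthesisSearch<String> search = new SynthesisSearch<>(
            s -> List.of(s + "N", s + "E"),              // append one step
            s -> 2 - (s.contains("N") ? 1 : 0) - (s.contains("E") ? 1 : 0),
            s -> s.contains("N") && s.contains("E"));
        System.out.println(search.run("", 100).orElse("none found"));
    }
}
```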
4

Certificate validation in untrusted domains

Batarfi, Omar Abdullah January 2007 (has links)
Authentication is a vital part of establishing secure online transactions, and Public Key Infrastructure (PKI) plays a crucial role in this process for a relying party. A PKI certificate provides proof of identity for a subject, and it inherits its trustworthiness from the fact that its issuer is a known (trusted) Certification Authority (CA) that vouches for the binding between a public key and a subject's identity. Certificate Policies (CPs) are the regulations recognized by PKI participants, and they are used as a basis for the evaluation of the trust embodied in PKI certificates. However, CPs are written in natural language, which can lead to ambiguities, spelling errors, and a lack of consistency when describing the policies. This makes it difficult to compare different CPs. This thesis offers a solution to the problems that arise when there is no trusted CA to vouch for the trust embodied in a certificate. With the worldwide increase in online transactions over the Internet, it is highly desirable to find a method for authenticating subjects in untrusted domains. The process of formalisation for CPs described in this thesis allows their semantics to be described. The formalisation relies on the XML language for describing the structure of the CP, and the formalisation process passes through three stages, with the outcome of the last stage being 27 applicable criteria. These criteria become a tool assisting a relying party to decide the level of trust that he/she can place in a subject certificate. The criteria are applied to the CP of the issuer of the subject certificate. To test their validity, the criteria developed have been examined against the UNCITRAL Model Law on Electronic Signatures, and they are able to handle the articles of the UNCITRAL law. Finally, a case study is conducted in order to show the applicability of the criteria. Real CPs have been used to prove their applicability and convergence. This shows that the criteria can handle the correspondence activities defined in real CPs adequately.
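As a purely mechanical sketch of the kind of processing an XML-encoded CP enables (the thesis's actual schema and its 27 criteria are not reproduced here), the example below parses a toy policy document and tallies which criteria an issuer's CP satisfies; the element and attribute names are invented.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import org.xml.sax.InputSource;

// Mechanical sketch only: the thesis derives 27 concrete criteria and its
// own XML structure for certificate policies; the names below are invented.
public class CpCriteriaDemo {
    public static void main(String[] args) throws Exception {
        String xml = """
            <certificatePolicy issuer="Example CA">
              <criterion name="keyGenerationControls" satisfied="true"/>
              <criterion name="subscriberIdentityProofing" satisfied="true"/>
              <criterion name="revocationTimeliness" satisfied="false"/>
            </certificatePolicy>""";

        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));

        NodeList criteria = doc.getElementsByTagName("criterion");
        int satisfied = 0;
        for (int i = 0; i < criteria.getLength(); i++) {
            Element c = (Element) criteria.item(i);
            if (Boolean.parseBoolean(c.getAttribute("satisfied"))) satisfied++;
        }
        // A relying party could map the proportion of satisfied criteria to
        // a coarse trust level for certificates issued under this CP.
        double ratio = (double) satisfied / criteria.getLength();
        System.out.printf("%d/%d criteria satisfied -> trust level: %s%n",
            satisfied, criteria.getLength(), ratio >= 0.8 ? "high" : "limited");
    }
}
```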
5

Using a loadtime metaobject protocol to enforce access control policies upon user-level compiled code

Welch, Ian Shawn January 2005 (has links)
This thesis evaluates the use of a loadtime metaobject protocol as a practical mechanism for enforcing access control policies upon applications distributed as user-level compiled code. Enforcing access control policies upon user-level compiled code is necessary because there are many situations where users are vulnerable to security breaches caused by downloading and running potentially untrustworthy applications provided in the form of user-level compiled code. These applications might be distributed applications, so access control for both local and distributed resources is required. Examples of potentially untrustworthy applications are browser plug-ins, software patches, new applications, or Internet computing applications such as SETI@home. Even applications from trusted sources might be malicious or simply contain bugs that can be exploited by attackers, so access control policies must be imposed to prevent the misuse of resources. Additionally, system administrators might wish to enforce access control policies upon these applications to ensure that users use them in accordance with local security requirements. Unfortunately, applications developed externally may not include the necessary enforcement code to allow the specification of organisation-specific access control policies. Operating system security mechanisms are too coarse-grained to enforce security policies on applications implemented as user-level code. Access control policies require mechanisms that control access to both user-level and operating system-level resources, but operating system mechanisms only focus on controlling access to system-level objects. Conventional object-oriented software engineering can reuse existing security architectures to enforce access control on user-level resources as well as system resources. Common techniques are to insert enforcement code within libraries or applications, or to use inheritance and proxies. However, these all provide a poor separation of concerns and cannot be used with compiled code. In-lined reference monitors provide a good separation of concerns and meet criteria for good security engineering. They use object code rewriting to control access to both user-level and system-level objects by in-lining reference monitor code into user-level compiled code. However, their focus is upon replacing existing security architectures, and current implementations do not address distributed access control policies. Another approach that does provide a good separation of concerns and allows reuse of existing security architectures is metaobject protocols. These allow constrained changes to be made to the semantics of code and therefore can be used to implement access control policies for both local and distributed resources. Loadtime metaobject protocols allow metaobject protocols to be used with compiled code because they rewrite base-level classes and insert meta-level interceptions. However, they have not been demonstrated to meet requirements for good security engineering such as complete mediation, and current implementations do not provide distributed access control. This thesis implements a loadtime metaobject protocol for the Java programming language. The design of the metaobject protocol specifically addresses separation of concerns, least privilege, complete mediation and economy of mechanism.
The implementation of the metaobject protocol, called Kava, has been evaluated by implementing diverse security policies in two case studies involving third-party standalone and distributed applications. These case studies are used as the basis of inferences about general suitability of using loadtime reflection for enforcing access control policies upon user-level compiled code.
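For readers who want to see where loadtime interception hooks into the modern JDK, the stub below uses the standard java.lang.instrument API (the class must be named in a Premain-Class manifest entry and loaded with -javaagent). Kava predates this API and performs its own class rewriting, so this is an analogous sketch, not Kava's mechanism; the rewriting step itself is deliberately left as a comment.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal sketch of a loadtime interception point. A metaobject protocol
// would rewrite each class's bytecode here before it is defined, so that
// base-level operations are routed through meta-level policy checks.
public class LoadtimeAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain pd, byte[] classfileBuffer) {
                // Rewriting would happen here, e.g. wrapping method bodies so
                // that calls first consult a metaobject enforcing an access
                // control policy. This stub only reports what it would touch.
                if (className != null && className.startsWith("untrusted/")) {
                    System.err.println("would instrument: " + className);
                }
                return null; // null means "class unchanged" in this API
            }
        });
    }
}
```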
6

Design and implementation of extensible middleware for non-repudiable interactions

Robinson, Paul Fletcher January 2006 (has links)
Non-repudiation is an aspect of security that is concerned with the creation of irrefutable audits of an interaction. Ensuring the audit is irrefutable and verifiable by a third party is not a trivial task. Considerable supporting infrastructure is required, which adds significant expense to the interaction. This infrastructure comprises (i) a non-repudiation-aware run-time environment, (ii) several purpose-built trusted services and (iii) an appropriate non-repudiation protocol. This thesis presents the design and implementation of such an infrastructure. The run-time environment makes use of several trusted services to achieve external verification of the audit trail. Non-repudiation is achieved by executing fair non-repudiation protocols. The fairness property of the non-repudiation protocol allows a participant to protect their own interests by preventing any party from gaining an advantage through misbehaviour. The infrastructure has two novel aspects: extensibility and support for automated implementation of protocols. Extensibility is achieved by implementing the infrastructure in middleware and by presenting a large variety of non-repudiable business interaction patterns to the application (a non-repudiable interaction pattern is a higher-level protocol composed from one or more non-repudiation protocols). The middleware is highly configurable, allowing new non-repudiation protocols and interaction patterns to be added easily, without disrupting the application. This thesis presents a rigorous mechanism for automated implementation of non-repudiation protocols. This ensures that the protocol being executed is the one that was intended and verified by the protocol designer. A family of non-repudiation protocols is inspected, and this inspection allows a set of generic finite state machines to be produced. These finite state machines can be used to maintain protocol state and manage the sending and receiving of appropriate protocol messages. A concrete implementation of the run-time environment and the protocol generation techniques is presented. This implementation is based on industry-supported Web service standards and services.
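The "generic finite state machines" of the abstract can be pictured as a transition table keyed by current state and incoming message type. The sketch below hard-codes one such table for a hypothetical fair-exchange outline (origin token, receipt token, key release by a trusted third party); the states and messages are invented for illustration, whereas the thesis generates such machines from verified protocol descriptions.

```java
import java.util.Map;

// Hedged sketch: protocol state plus a transition table. Receiving a
// message that the table does not permit in the current state is a
// protocol violation and is rejected.
public class NonRepFsm {
    enum State { WAIT_NRO, WAIT_NRR, WAIT_KEY, COMPLETE, ABORTED }
    enum Msg { NRO_TOKEN, NRR_TOKEN, TTP_KEY, TIMEOUT }

    private State state = State.WAIT_NRO;

    // One table per protocol; middleware of this kind would derive tables
    // like this from the designer's verified protocol description.
    private static final Map<State, Map<Msg, State>> TABLE = Map.of(
        State.WAIT_NRO, Map.of(Msg.NRO_TOKEN, State.WAIT_NRR,
                               Msg.TIMEOUT,   State.ABORTED),
        State.WAIT_NRR, Map.of(Msg.NRR_TOKEN, State.WAIT_KEY,
                               Msg.TIMEOUT,   State.ABORTED),
        State.WAIT_KEY, Map.of(Msg.TTP_KEY,   State.COMPLETE,
                               Msg.TIMEOUT,   State.ABORTED));

    public State receive(Msg msg) {
        State next = TABLE.getOrDefault(state, Map.of()).get(msg);
        if (next == null)
            throw new IllegalStateException(msg + " not valid in " + state);
        state = next;
        return state;
    }

    public static void main(String[] args) {
        NonRepFsm run = new NonRepFsm();
        System.out.println(run.receive(Msg.NRO_TOKEN)); // WAIT_NRR
        System.out.println(run.receive(Msg.NRR_TOKEN)); // WAIT_KEY
        System.out.println(run.receive(Msg.TTP_KEY));   // COMPLETE
    }
}
```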
7

Security management for services that are integrated across enterprise boundaries

Aljareh, Salem Sultan January 2004 (has links)
This thesis addresses the problem of security management for services that are integrated across enterprise boundaries, as typically found in multi-agency environments. We consider the multi-agency environment as a collaboration network. The Electronic Health Record is a good example of an application in the multi-agency service environment, as there are different authorities claiming rights to access the personal and medical data of a patient. In this thesis we use the Electronic Health Record as the main context. Policies are determined by security goals; goals in turn are determined by regulations and laws. In general, goals can be subtle and difficult to formalise, especially across administrative boundaries as with the Electronic Health Record. Security problems may result when designers attempt to apply general principles to cases that have subtleties in their full detail. It is vital to understand such subtleties if a robust solution is to be achieved. Existing solutions are limited in that they tend only to deal with pre-determined goals and fail to address situations in which the goals need to be negotiated. The task-based approach seems well suited to addressing this. This work is structured in five parts. In the first part we review current declarations, legislation and regulations to bring together a global, European and national perspective for security in health services, and we identify requirements. In the second part we investigate a proposed solution for security in the Health Service by examining the BMA (British Medical Association) model. The third part is a development of a novel task-based CTCP/CTRP model based on two linked protocols. The Collaboration Task Creation Protocol (CTCP) establishes a framework for handling a request for information, and the Collaboration Task Runtime Protocol (CTRP) runs the request under the supervision of CTCP. In the fourth part we validate the model against the Data Protection Act and the Caldicott Principles and review it for technical completeness and satisfaction of software engineering principles. Finally, in the fifth part we apply the model to two case studies in the multi-agency environment: a simple one (Dynamic Coalition) for illustration purposes and a more complex one (Electronic Health Record) for evaluating the model's coverage, neutrality and focus, and exception handling.
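A highly schematic sketch of the division of labour between the two linked protocols might look as follows: a CTCP-style step negotiates and records a collaboration task (its purpose and the data it may cover), and a CTRP-style step services individual requests under that task's supervision. All types, fields and values below are invented; the thesis's protocols additionally involve negotiation among the agencies' security authorities.

```java
import java.util.List;

// Invented illustration of the CTCP/CTRP split: task creation first,
// then request execution constrained by the created task.
public class CollaborationTaskDemo {
    record Task(String id, String purpose, List<String> approvedFields) {}

    // CTCP role: establish the collaboration task. In practice this step
    // is a negotiation, not a single local call.
    static Task createTask(String purpose, List<String> fields) {
        return new Task("task-001", purpose, fields);
    }

    // CTRP role: run a request, releasing only fields the task approved.
    static String runRequest(Task task, String field) {
        if (!task.approvedFields().contains(field))
            return "DENIED: '" + field + "' outside task " + task.id();
        return "released: " + field + " (purpose: " + task.purpose() + ")";
    }

    public static void main(String[] args) {
        Task t = createTask("emergency treatment", List.of("allergies", "bloodGroup"));
        System.out.println(runRequest(t, "allergies"));   // released
        System.out.println(runRequest(t, "homeAddress")); // denied
    }
}
```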
8

Intelligent agents-based networks security

Abouzakhar, Nasser Salem January 2005 (has links)
The growing dependence of modern society on telecommunication and information networks has become inevitable. The increase in the number of networks interconnected over the Internet has led to an increase in security threats. The existing mobile and fixed network systems and telecommunication protocols are not appropriately designed to deal with currently developed distributed attacks. I started my research work by exploring the deployment of intelligent Agents that could detect network anomalies and issue automated response actions. An Intelligent Agent (IA) [Knapik et al., 1998] is an entity that carries out some set of operations on behalf of a user or other software with some degree of independence or autonomy. The investigation of the Agents paradigm led to a deep understanding of the underlying problem; therefore, my attention turned to machine learning, specifically the Bayesian learning and Fuzzy logic approaches. A modelled network intrusion detector has been proposed. This model equips Agents with learning capabilities for detecting current as well as similar future distributed network attacks. In order to detect those anomalies as early as possible, the Bayesian network approach has been proposed. This approach is considered a promising method for determining suspicious network anomaly events and relating them to subsequent, dependent illegitimate activities. This research suggests innovative ways to develop Intelligent Agents that incorporate Bayesian learning to address network security risks associated with current Networks Intrusion Detection Systems (NIDSs) designs and implementations. NIDSs have traditionally focused on detecting attacks, and while detection serves a vital purpose, it does not by itself provide the ultimate solution. As a result, an effective response mechanism to those detected attacks is required to minimise their effect and hence enhance NIDSs' capabilities. Therefore, other Agents with Fuzzy intelligence capabilities have been proposed to initiate successful automated response actions. Fuzzy Agents have been proposed to handle this task, with the ability to respond quickly and dynamically control the availability of allocated network resources. The evaluation methodology used to assess the performance of the developed models concentrated on detecting as well as predicting unauthorised activities in networks. By means of evaluation and validation, as well as empirical evidence, we are able to determine the effectiveness of the developed models and assumptions. The performance of the developed detection model algorithms for unsupervised learning tasks has been evaluated using well-known standard methods such as the confusion matrix. The achieved results indicate that the developed model led to a substantial reduction in false alarms, with a significant increase in detection rates. This research work operates within the context of two domains: the first drawn from the network security community and the other from the machine learning community. It investigates the deployment of both Bayesian Learning as a probabilistic approach and Fuzzy Intelligence as a possibilistic approach to network security, in order to detect as well as predict evolving network anomalies, and to respond effectively to those developed attacks and minimise their effects. Consequently, it may provide innovative solutions that can be implemented in a cost-effective manner.
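Two of the quantitative ideas in this abstract are easy to make concrete. The sketch below first scores an observed event with a naive-Bayes simplification (the thesis uses Bayesian networks, which additionally model dependencies between events), then computes the detection rate and false alarm rate from a confusion matrix, the evaluation method the abstract names. All probabilities and counts are invented toy numbers.

```java
// Naive-Bayes simplification for illustration: features are treated as
// independent given the class, which a full Bayesian network does not assume.
public class BayesAnomalyDemo {
    public static void main(String[] args) {
        // P(feature present | class), indexed as [attack, normal]
        double[] pSynFlood  = {0.70, 0.02};  // burst of half-open connections
        double[] pPortSweep = {0.60, 0.05};  // many distinct destination ports
        double[] prior      = {0.01, 0.99};  // attacks are rare

        boolean synFlood = true, portSweep = true; // observed event features

        double[] logPost = new double[2];
        for (int c = 0; c < 2; c++) {
            logPost[c] = Math.log(prior[c])
                + Math.log(synFlood  ? pSynFlood[c]  : 1 - pSynFlood[c])
                + Math.log(portSweep ? pPortSweep[c] : 1 - pPortSweep[c]);
        }
        // Normalise the two joint scores to a posterior probability of attack.
        double pAttack = 1.0 / (1.0 + Math.exp(logPost[1] - logPost[0]));
        System.out.printf("P(attack | observations) = %.3f%n", pAttack);

        // Confusion-matrix counts -> the headline rates the abstract cites.
        int tp = 95, fn = 5, fp = 20, tn = 980;
        System.out.printf("detection rate = %.3f, false alarm rate = %.3f%n",
            (double) tp / (tp + fn), (double) fp / (fp + tn));
    }
}
```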
9

Risk reduction through technological control of personal information

Atkinson, Shirley January 2007 (has links)
Abuse and harm to individuals, through harassment and bullying, coexist with Identity Theft as criminal behaviours supported by the ready availability of personal information. Incorporating privacy protection measures into software design requires a thorough understanding of how an individual's privacy is affected by Internet technologies. This research set out to develop such an understanding by examining privacy risks for two groups of individuals for whom privacy is an important issue: domestic abuse survivors and teenagers. The purpose was to examine the reality of the privacy risks for these two groups. This research combined a number of approaches underpinned by a selection of foundation theories from four separate domains: software engineering; information systems; social science; and criminal behaviour. Semi-structured interviews, focus groups, workshops and questionnaires gathered information from managers of refuges and outreach workers from Women's Aid; representatives from probation and police domestic violence units; and teenagers. The findings from these first interactions provided specific examples of risks posed to the two groups. These findings demonstrated the need for a selection of protection mechanisms that promote awareness of the potential risk among vulnerable individuals. Emerging from these findings was a set of concepts that formed the basis of a novel taxonomy-of-threat framework designed to assist in risk assessment. To demonstrate the crossover between understanding the social environment and the use of technology, the taxonomy of threat was incorporated into a novel Vulnerability Assessment Framework, which in turn provided a basis for an extension to standard browser technology. A proof-of-concept prototype was implemented by creating an Internet Explorer 7.0 browser helper object. The prototype also made use of the Semantic Web protocols of the Resource Description Framework and the Web Ontology Language for simple data storage and reasoning. The purpose of this combination was to demonstrate how the environment in which the individual primarily interacts with the Internet could be adapted to provide awareness of the potential risk, and to enable the individual to take steps to reduce that risk. Representatives of the user groups were consulted to evaluate the acceptability of the prototype approach. The favourable ratings given by the respondents demonstrated the acceptability of such an approach to monitoring personal information, with the proviso that control remained with the individual. The evaluation exercise also demonstrated how the prototype would serve as a useful tool to make individuals aware of the dangers. The novel contribution of this research has four facets: it advances understanding of privacy protection for the individual; illustrates an effective combination of methodology frameworks to address the complex issue of privacy; provides a framework for risk assessment through the taxonomy of threat; and demonstrates the novel vulnerability assessment framework through a proof-of-concept prototype.
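As a hedged sketch of the "simple data storage" role RDF plays in such a prototype, the example below records one disclosure event as RDF triples. Apache Jena is an assumed dependency (the thesis names only RDF and OWL, not a toolkit), and the vocabulary URIs are illustration-only choices.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

// Records a personal-information disclosure as RDF so that a browser
// helper could later reason over accumulating disclosures and warn the
// user. Requires Apache Jena on the classpath; all URIs are invented.
public class PersonalInfoStore {
    static final String NS = "http://example.org/privacy#";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Property disclosedAt = model.createProperty(NS, "disclosedAt");
        Property infoType    = model.createProperty(NS, "infoType");

        // One event: a phone number was entered on a particular site.
        Resource event = model.createResource(NS + "disclosure-001")
            .addProperty(infoType, "phoneNumber")
            .addProperty(disclosedAt, "https://forum.example.net");

        model.write(System.out, "TURTLE");
    }
}
```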
10

User authentication and supervision in networked systems

Dowland, Paul Steven January 2004 (has links)
This thesis considers the problem of user authentication and supervision in networked systems. The issue of user authentication is one of on-going concern in modern IT systems with the increased use of computer systems to store and provide access to sensitive information resources. While the traditional username/password login combination can be used to protect access to resources (when used appropriately), users often compromise the security that these methods can provide. While alternative (and often more secure) systems are available, these alternatives usually require expensive hardware to be purchased and integrated into IT systems. Even if alternatives are available (and financially viable), they frequently require users to authenticate in an intrusive manner (e.g. forcing a user to use a biometric technique relying on fingerprint recognition). Assuming an acceptable form of authentication is available, this still does not address the problem of on-going confidence in the user's identity - i.e. once the user has logged in at the beginning of a session, there is usually no further confirmation of the user's identity until they log out or lock the session in which they are operating. Hence there is a significant requirement not only to improve login authentication but also to introduce the concept of continuous user supervision. Before attempting to implement a solution to the problems outlined above, a range of currently available user authentication methods are identified and evaluated. This is followed by a survey conducted to evaluate user attitudes and opinions relating to login and continuous authentication. The results reinforce perceptions regarding the weaknesses of the traditional username/password combination, and suggest that alternative techniques can be acceptable. This provides justification for the work described in the latter part of the thesis. A number of small-scale trials are conducted to investigate alternative authentication techniques, using ImagePINs and associative/cognitive questions. While these techniques are of an intrusive nature, they offer potential improvements either as initial login authentication methods or as a challenge during a session to confirm the identity of the logged-in user. A potential solution to the problem of continuous user authentication is presented through the design and implementation of a system to monitor user activity throughout a logged-in session. The effectiveness of this system is evaluated through a series of trials investigating the use of keystroke analysis using digraph, trigraph and keyword-based metrics (with the latter two methods representing novel approaches to the analysis of keystroke data). The initial trials demonstrate the viability of these techniques, whereas later trials are used to demonstrate the potential for a composite approach. The final trial described in this thesis was conducted over a three-month period with 35 trial participants and resulted in over five million samples. Due to the scope, duration, and the volume of data collected, this trial provides a significant contribution to the domain, with the use of a composite analysis method representing entirely new work. The results of these trials show that the technique of keystroke analysis is one that can be effective for the majority of users. Finally, a prototype composite authentication and response system is presented, which demonstrates how transparent, non-intrusive, continuous user authentication can be achieved.
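The digraph metric the trials evaluate can be made concrete with a toy example: collect the latency between consecutive key presses for each two-letter pair, average per digraph, and compare against a stored per-user profile. The sample timings, profile values and tolerance below are invented; the thesis's trials additionally cover trigraph, keyword-based and composite analyses.

```java
import java.util.*;

// Toy digraph-latency check: inter-key timings per two-letter pair are
// compared against a profile built from the user's enrolment typing.
public class DigraphDemo {
    record KeyEvent(char key, long timeMs) {}

    static Map<String, List<Long>> digraphLatencies(List<KeyEvent> events) {
        Map<String, List<Long>> out = new HashMap<>();
        for (int i = 1; i < events.size(); i++) {
            String pair = "" + events.get(i - 1).key() + events.get(i).key();
            out.computeIfAbsent(pair, k -> new ArrayList<>())
               .add(events.get(i).timeMs() - events.get(i - 1).timeMs());
        }
        return out;
    }

    public static void main(String[] args) {
        List<KeyEvent> session = List.of(
            new KeyEvent('t', 0), new KeyEvent('h', 110),
            new KeyEvent('e', 205), new KeyEvent('t', 420),
            new KeyEvent('h', 515), new KeyEvent('e', 640));

        // Stored profile: mean latency per digraph from enrolment typing.
        Map<String, Double> profile = Map.of("th", 100.0, "he", 105.0);

        // Flag the session if observed means drift too far from the profile.
        double tolerance = 0.3; // 30% allowed deviation (arbitrary choice)
        digraphLatencies(session).forEach((pair, samples) -> {
            Double expected = profile.get(pair);
            if (expected == null) return; // digraph not in profile: skip
            double mean = samples.stream().mapToLong(Long::longValue)
                                 .average().orElse(0);
            boolean ok = Math.abs(mean - expected) / expected <= tolerance;
            System.out.printf("%s: mean=%.0fms expected=%.0fms -> %s%n",
                pair, mean, expected, ok ? "consistent" : "suspicious");
        });
    }
}
```

A continuous-supervision system would run such comparisons transparently in the background and raise a challenge only when several metrics drift at once, which is the motivation for the composite approach the trials explore.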
