  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Classifying and responding to network intrusions

Papadaki, Maria January 2004 (has links)
Intrusion detection systems (IDS) have been widely adopted within the IT community, as passive monitoring tools that report security-related problems to system administrators. However, the increasing number and evolving complexity of attacks, along with the growth and complexity of networking infrastructures, has led to overwhelming numbers of IDS alerts, which leave a significantly smaller timeframe for a human to respond. The need for automated response is therefore very much evident. However, the adoption of such approaches has been constrained by practical limitations and administrators' consequent mistrust of systems' abilities to issue appropriate responses. The thesis presents a thorough analysis of the problem of intrusions, and identifies false alarms as the main obstacle to the adoption of automated response. A critical examination of existing automated response systems is provided, along with a discussion of why a new solution is needed. The thesis determines that, while detection capabilities remain imperfect, the problem of false alarms cannot be eliminated. Automated response technology must take this into account, and instead focus upon avoiding the disruption of legitimate users and services in such scenarios. The overall aim of the research has therefore been to enhance the automated response process by considering the context of an attack, and to investigate and evaluate a means of making intelligent response decisions. The realisation of this objective has included the formulation of a response-oriented taxonomy of intrusions, which is used as a basis to systematically study intrusions and understand the threats detected by an IDS. From this foundation, a novel Flexible Automated and Intelligent Responder (FAIR) architecture has been designed, as the basis from which flexible and escalating levels of response are offered, according to the context of an attack.
The thesis describes the design and operation of the architecture, focusing upon the contextual factors influencing the response process, and the way they are measured and assessed to formulate response decisions. The architecture is underpinned by the use of response policies which provide a means to reflect the changing needs and characteristics of organisations. The main concepts of the new architecture were validated via a proof-of-concept prototype system. A series of test scenarios were used to demonstrate how the context of an attack can influence the response decisions, and how the response policies can be customised and used to enable intelligent decisions. This helped to prove that the concept of flexible automated response is indeed viable, and that the research has provided a suitable contribution to knowledge in this important domain.
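As a purely illustrative sketch of the idea of escalating responses driven by attack context, the fragment below scores a context and maps it onto a response level. The factor names, weights, thresholds and response levels are all assumptions made for this example; they are not taken from the FAIR architecture itself.

```python
from dataclasses import dataclass

@dataclass
class AttackContext:
    confidence: float          # detector's confidence the alert is genuine (0..1)
    severity: float            # estimated impact of the attack (0..1)
    target_criticality: float  # importance of the targeted asset (0..1)

# Escalating response levels, from least to most disruptive.
RESPONSES = ["log_only", "notify_admin", "rate_limit_source", "block_source"]

def choose_response(ctx: AttackContext, policy_weights=(0.5, 0.3, 0.2)) -> str:
    """Score the attack context and map it onto an escalating response level.

    Low-confidence alerts (likely false alarms) deliberately receive a
    non-disruptive response, so legitimate users are not penalised.
    """
    w_conf, w_sev, w_crit = policy_weights
    score = (w_conf * ctx.confidence + w_sev * ctx.severity
             + w_crit * ctx.target_criticality)
    # A probable false alarm is never escalated beyond passive logging.
    if ctx.confidence < 0.3:
        return "log_only"
    return RESPONSES[min(int(score * len(RESPONSES)), len(RESPONSES) - 1)]
```

The `policy_weights` parameter stands in for the customisable response policies the abstract describes: changing the weights changes how the same alert is handled, without altering the decision code.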
82

Active security vulnerability notification and resolution

Alayed, Abdulaziz Ibrahim January 2006 (has links)
The early version of the Internet was designed for connectivity only, without consideration of security, and the Internet is consequently an open structure. Networked systems are vulnerable for a number of reasons: design errors, implementation flaws, and poor management. A vulnerability is a hole or weak point that can be exploited to compromise the security of a system. Operating systems and applications are often vulnerable because of design errors. Software vendors release patches for discovered vulnerabilities, and rely upon system administrators to accept and install the patches on their systems. Many system administrators fail to install patches on time, and consequently leave their systems vulnerable to exploitation by hackers. This exploitation can result in various security breaches, including website defacement, denial of service, or malware attacks. The overall problem is significant, with an average of 115 vulnerabilities per week being documented during 2005. This thesis considers the problem of vulnerabilities in IT networked systems, and maps the vulnerability types into a technical taxonomy. The thesis presents a thorough analysis of the existing methods of vulnerability management, which determines that these methods have failed to manage the problem in a comprehensive way, and shows the need for a comprehensive management system capable of addressing the awareness and patch deployment problems. A critical examination of vulnerability database statistics over the past few years is provided, together with a benchmarking of the problem in a reference environment and a discussion of why a new approach is needed. The research examined and compared different vulnerability advisories, and proposed a generic vulnerability format towards automating the notification process. The thesis identifies the standard process of addressing vulnerabilities and the over-reliance upon manual methods.
An automated management system must take into account new vulnerabilities and patch deployment to provide a comprehensive solution. The overall aim of the research has therefore been to design a new framework to address these flaws in networked systems, harmonised with the standard system administrator process. The approach, known as AVMS (Automated Vulnerability Management System), is capable of filtering and prioritising the relevant messages, and then downloading the associated patches and deploying them to the required machines. The framework is validated through a proof-of-concept prototype system. A series of tests involving different advisories is used to illustrate how AVMS would behave. This helped to prove that the automated vulnerability management system prototype is indeed viable, and that the research has provided a suitable contribution to knowledge in this important domain.
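The filtering and prioritisation step that AVMS performs on incoming advisories can be sketched as follows. The advisory fields, severity scale and example data here are assumptions for illustration; they do not reflect the generic vulnerability format actually proposed in the thesis.

```python
def prioritise_advisories(advisories, installed_software):
    """Keep only advisories that affect installed software, most severe first.

    This is the core of the 'filter, then prioritise' step: irrelevant
    notifications are discarded before any patch is downloaded or deployed.
    """
    relevant = [a for a in advisories if a["product"] in installed_software]
    return sorted(relevant, key=lambda a: a["severity"], reverse=True)

# Illustrative advisory feed (fields and values are made up for the example).
advisories = [
    {"id": "ADV-001", "product": "webserver", "severity": 7, "patch": "ws-1.2.patch"},
    {"id": "ADV-002", "product": "dbms",      "severity": 9, "patch": "db-3.1.patch"},
    {"id": "ADV-003", "product": "mailer",    "severity": 5, "patch": "ml-0.9.patch"},
]

# Only the webserver and DBMS are installed, so the mailer advisory is dropped
# and the remaining two are ordered by severity for patch deployment.
queue = prioritise_advisories(advisories, installed_software={"webserver", "dbms"})
```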
83

Arguing security : a framework for analyzing security requirements

Haley, Charles B. January 2007 (has links)
No description available.
84

Anonymity and trust in the electronic world

Chowdhury, Partha Das January 2005 (has links)
Privacy has never been an explicit goal of authorisation mechanisms. The traditional approach to authorisation relies on strong authentication of a stable identity using long-term credentials. Audit is then linked to authorisation via the same identity. Such an approach compels users to enter into a trust relationship with large parts of the system infrastructure, including entities in remote domains. In this dissertation we advance the view that this type of compulsive trust relationship is unnecessary and can have undesirable consequences. We examine in some detail the consequences which such undesirable trust relationships can have on individual privacy, and investigate the extent to which taking a unified approach to trust and anonymity can actually provide useful leverage to address threats to privacy without compromising the principal goals of authentication and audit. We conclude that many applications would benefit from mechanisms which enabled them to make authorisation decisions without using long-term credentials. We next propose specific mechanisms to achieve this, introducing a novel notion of a short-lived electronic identity, which we call a surrogate. This approach allows a localisation of trust: entities are not compelled to transitively trust other entities in remote domains. In particular, resolution of stable identities need only ever be done locally to the entity named. Our surrogates allow delegation, enable role-based access control policies to be enforced across multiple domains, and permit the use of non-anonymous payment mechanisms, all without compromising the privacy of a user.
The localisation of trust resulting from the approach proposed in this dissertation also has the potential to allow clients to control the risks to which they are exposed by bearing the cost of relevant countermeasures themselves, rather than forcing clients to trust the system infrastructure to protect them and to bear an equal share of the cost of all countermeasures whether or not effective for them. This consideration means that our surrogate-based approach and mechanisms are of interest even in Kerberos-like scenarios where anonymity is not a requirement, but the remote authentication mechanism is untrustworthy.
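A minimal sketch of the surrogate idea, under stated assumptions: a local authority issues a fresh, short-lived pseudonym bound to a stable identity, keeps the mapping to itself, and remote parties only ever see the pseudonym. The token format, lifetime and class names below are illustrative, not the dissertation's actual mechanism.

```python
import hashlib
import hmac
import secrets
import time

class LocalAuthority:
    """Issues short-lived surrogates; resolution back to the stable identity
    is only possible locally, using this authority's private state."""

    def __init__(self):
        self._key = secrets.token_bytes(32)
        self._issued = {}  # surrogate -> (stable_identity, expiry time)

    def issue_surrogate(self, stable_identity: str, ttl_seconds: int = 300) -> str:
        """Mint a fresh pseudonym; each call yields an unlinkable new one."""
        nonce = secrets.token_hex(8)
        tag = hmac.new(self._key, f"{stable_identity}:{nonce}".encode(),
                       hashlib.sha256).hexdigest()[:16]
        surrogate = f"sur-{nonce}-{tag}"
        self._issued[surrogate] = (stable_identity, time.time() + ttl_seconds)
        return surrogate

    def resolve(self, surrogate: str):
        """Local-only resolution; returns None for unknown or expired surrogates."""
        identity, expiry = self._issued.get(surrogate, (None, 0.0))
        return identity if time.time() < expiry else None
```

A remote domain granted a surrogate can authorise and audit against it for its lifetime, while only the issuing (local) authority can ever link it back to the person, which is the localisation of trust the abstract describes.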
85

Typed static analysis for concurrent, policy-based, resource access control

Nguyen, Nicholas January 2006 (has links)
No description available.
86

Overlay networks for the defence of DDOS

Ellis, David Mark January 2007 (has links)
No description available.
87

Certificate validation in untrusted domains

Batarfi, Omar Abdullah January 2007 (has links)
Authentication is a vital part of establishing secure online transactions, and Public Key Infrastructure (PKI) plays a crucial role in this process for a relying party. A PKI certificate provides proof of identity for a subject, and it inherits its trustworthiness from the fact that its issuer is a known (trusted) Certification Authority (CA) that vouches for the binding between a public key and a subject's identity. Certificate Policies (CPs) are the regulations recognised by PKI participants, and they are used as a basis for the evaluation of the trust embodied in PKI certificates. However, CPs are written in natural language, which can lead to ambiguities, spelling errors, and a lack of consistency when describing the policies. This makes it difficult to perform comparisons between different CPs. This thesis offers a solution to the problems that arise when there is no trusted CA to vouch for the trust embodied in a certificate. With the worldwide increase in the number of online transactions over the Internet, it is highly desirable to find a method for authenticating subjects in untrusted domains. The process of formalisation for CPs described in this thesis allows their semantics to be described. The formalisation relies on the XML language for describing the structure of the CP, and the formalisation process passes through three stages, with the outcome of the last stage being 27 applicable criteria. These criteria become a tool assisting a relying party to decide the level of trust that he/she can place on a subject certificate. The criteria are applied to the CP of the issuer of the subject certificate. To test their validity, the criteria developed have been examined against the UNCITRAL Model Law for Electronic Signatures, and they are able to handle the articles of the UNCITRAL law. Finally, a case study is conducted in order to show the applicability of the criteria. Real CPs have been used to prove their applicability and convergence.
This shows that the criteria can adequately handle the correspondence activities defined in real CPs.
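The general shape of this evaluation, applying a checklist of criteria to a CP and deriving a coarse trust level, can be sketched as below. The criteria names and the thresholds are assumptions made for the example; the thesis derives 27 specific criteria, which are not reproduced here.

```python
def trust_level(cp: dict, criteria: list) -> str:
    """Count how many criteria the certificate policy satisfies and map the
    ratio onto a coarse trust level for the relying party's decision."""
    satisfied = sum(1 for c in criteria if cp.get(c, False))
    ratio = satisfied / len(criteria)
    if ratio >= 0.8:
        return "high"
    if ratio >= 0.5:
        return "medium"
    return "low"

# Illustrative criteria (hypothetical names, not the thesis's 27 criteria).
criteria = ["identity_proofing", "key_protection", "revocation_service",
            "audit_logging", "liability_stated"]

# A CP, already formalised (e.g. from XML) into criterion -> satisfied flags.
cp = {"identity_proofing": True, "key_protection": True,
      "revocation_service": True, "audit_logging": False,
      "liability_stated": False}
```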
88

Using a loadtime metaobject protocol to enforce access control policies upon user-level compiled code

Welch, Ian Shawn January 2005 (has links)
This thesis evaluates the use of a loadtime metaobject protocol as a practical mechanism for enforcing access control policies upon applications distributed as user-level compiled code. Enforcing access control policies upon user-level compiled code is necessary because there are many situations where users are vulnerable to security breaches when they download and run potentially untrustworthy applications provided in the form of user-level compiled code. These applications might be distributed applications, so access control for both local and distributed resources is required. Examples of potentially untrustworthy applications are browser plug-ins, software patches, new applications, or Internet computing applications such as SETI@home. Even applications from trusted sources might be malicious or simply contain bugs that can be exploited by attackers, so access control policies must be imposed to prevent the misuse of resources. Additionally, system administrators might wish to enforce access control policies upon these applications to ensure that users use them in accordance with local security requirements. Unfortunately, applications developed externally may not include the necessary enforcement code to allow the specification of organisation-specific access control policies. Operating system security mechanisms are too coarse-grained to enforce security policies on applications implemented as user-level code. Mechanisms that control access to both user-level and operating system-level resources are required, but operating system mechanisms focus only on controlling access to system-level objects. Conventional object-oriented software engineering can be used to apply existing security architectures to enforce access control on user-level resources as well as system-level resources. Common techniques are to insert enforcement code within libraries or applications, or to use inheritance and proxies.
However, these all provide a poor separation of concerns and cannot be used with compiled code. In-lined reference monitors provide a good separation of concerns and meet criteria for good security engineering. They use object code rewriting to control access to both user-level and system-level objects by in-lining reference monitor code into user-level compiled code. However, their focus is upon replacing existing security architectures, and current implementations do not address distributed access control policies. Another approach that does provide a good separation of concerns and allows reuse of existing security architectures is metaobject protocols. These allow constrained changes to be made to the semantics of code and can therefore be used to implement access control policies for both local and distributed resources. Loadtime metaobject protocols allow metaobject protocols to be used with compiled code because they rewrite base-level classes and insert meta-level interceptions. However, these have not been demonstrated to meet requirements for good security engineering such as complete mediation, and current implementations do not provide distributed access control. This thesis implements a loadtime metaobject protocol for the Java programming language. The design of the metaobject protocol specifically addresses separation of concerns, least privilege, complete mediation and economy of mechanism. The implementation of the metaobject protocol, called Kava, has been evaluated by implementing diverse security policies in two case studies involving third-party standalone and distributed applications. These case studies are used as the basis of inferences about the general suitability of using loadtime reflection for enforcing access control policies upon user-level compiled code.
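Kava itself rewrites Java bytecode at load time; the following is only a rough Python analogue of the meta-level interception idea, in which every public method of a class is wrapped so that an access-control check mediates calls before the base-level behaviour runs. The policy table and class are assumptions for the example.

```python
import functools

# Illustrative policy: reads are permitted, writes are denied.
POLICY = {"read": True, "write": False}

def enforce_policy(cls):
    """Wrap each public method so every invocation is mediated by the policy,
    approximating complete mediation at the class level without touching the
    base-level source (the metaobject-protocol idea, in decorator form)."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            @functools.wraps(attr)
            def wrapper(self, *args, _base=attr, _op=name, **kwargs):
                if not POLICY.get(_op, False):
                    raise PermissionError(f"operation '{_op}' denied by policy")
                return _base(self, *args, **kwargs)
            setattr(cls, name, wrapper)
    return cls

@enforce_policy
class Document:
    def read(self):
        return "contents"
    def write(self, data):
        return f"wrote {data}"
```

The key property being illustrated is separation of concerns: the `Document` class contains no enforcement code, and the policy can be changed without modifying or recompiling it.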
89

Design and implementation of extensible middleware for non-repudiable interactions

Robinson, Paul Fletcher January 2006 (has links)
Non-repudiation is an aspect of security that is concerned with the creation of irrefutable audits of an interaction. Ensuring the audit is irrefutable and verifiable by a third party is not a trivial task. A considerable supporting infrastructure is required, which adds significant expense to the interaction. This infrastructure comprises (i) a non-repudiation-aware run-time environment, (ii) several purpose-built trusted services and (iii) an appropriate non-repudiation protocol. This thesis presents the design and implementation of such an infrastructure. The run-time environment makes use of several trusted services to achieve external verification of the audit trail. Non-repudiation is achieved by executing fair non-repudiation protocols. The fairness property of the non-repudiation protocol allows a participant to protect their own interests by preventing any party from gaining an advantage through misbehaviour. The infrastructure has two novel aspects: extensibility and support for automated implementation of protocols. Extensibility is achieved by implementing the infrastructure in middleware and by presenting a large variety of non-repudiable business interaction patterns to the application (a non-repudiable interaction pattern is a higher-level protocol composed from one or more non-repudiation protocols). The middleware is highly configurable, allowing new non-repudiation protocols and interaction patterns to be easily added without disrupting the application. This thesis presents a rigorous mechanism for the automated implementation of non-repudiation protocols. This ensures that the protocol being executed is that which was intended and verified by the protocol designer. A family of non-repudiation protocols is taken and inspected. This inspection allows a set of generic finite state machines to be produced. These finite state machines can be used to maintain protocol state and manage the sending and receiving of appropriate protocol messages.
A concrete implementation of the run-time environment and the protocol generation techniques is presented. This implementation is based on industry supported Web service standards and services.
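The role of such a finite state machine, maintaining protocol state and refusing messages that arrive out of order, can be sketched as below. The states and message names are illustrative assumptions loosely modelled on a TTP-assisted fair exchange, not the protocol family the thesis actually analyses.

```python
# Hypothetical transition table for one participant in a fair exchange:
# NRO = non-repudiation of origin, NRR = non-repudiation of receipt,
# TTP = trusted third party that releases the decryption key last.
TRANSITIONS = {
    ("start", "send_message_and_NRO"):        "awaiting_NRR",
    ("awaiting_NRR", "receive_NRR"):          "awaiting_key",
    ("awaiting_key", "receive_key_from_TTP"): "complete",
}

class ProtocolMachine:
    """Maintains protocol state and accepts only the events the current
    state permits, so an out-of-order or unexpected message is rejected
    rather than silently processed."""

    def __init__(self):
        self.state = "start"
        self.audit = []  # evidence tokens collected along the way

    def handle(self, event: str) -> str:
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is None:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
        self.audit.append(event)
        self.state = next_state
        return self.state
```

Because the table is data rather than code, a new protocol can be supported by supplying a new table, which mirrors the extensibility goal described in the abstract.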
90

Security management for services that are integrated across enterprise boundaries

Aljareh, Salem Sultan January 2004 (has links)
This thesis addresses the problem of security management for services that are integrated across enterprise boundaries, as typically found in multi-agency environments. We consider the multi-agency environment as a collaboration network. The Electronic Health Record is a good example of an application in the multi-agency service environment, as there are different authorities claiming rights to access the personal and medical data of a patient. In this thesis we use the Electronic Health Record as the main context. Policies are determined by security goals; goals in turn are determined by regulations and laws. In general, goals can be subtle and difficult to formalise, especially across administrative boundaries as with the Electronic Health Record. Security problems may result when designers attempt to apply general principles to cases that have subtleties in the full detail. It is vital to understand such subtleties if a robust solution is to be achieved. Existing solutions are limited in that they tend only to deal with pre-determined goals and fail to address situations in which the goals need to be negotiated. The task-based approach seems well suited to addressing this. This work is structured in five parts. In the first part we review current declarations, legislation and regulations to bring together a global, European and national perspective for security in health services, and we identify requirements. In the second part we investigate a proposed solution for security in the Health Service by examining the BMA (British Medical Association) model. The third part is the development of a novel task-based CTCP/CTRP model based on two linked protocols. The Collaboration Task Creation Protocol (CTCP) establishes a framework for handling a request for information, and the Collaboration Task Runtime Protocol (CTRP) runs the request under the supervision of CTCP.
In the fourth part we validate the model against the Data Protection Act and the Caldicott Principles, and review it for technical completeness and satisfaction of software engineering principles. Finally, in the fifth part we apply the model to two case studies in the multi-agency environment: a simple one (Dynamic Coalition) for illustration purposes and a more complex one (Electronic Health Record) for evaluating the model's coverage, neutrality and focus, and exception handling.
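The relationship between the two linked protocols, a creation step that negotiates an agreed, scoped task, and a runtime step that only services requests within that scope, can be sketched as follows. The field names, the policy table and the intersection-based approval rule are assumptions invented for the example.

```python
class CollaborationTask:
    """Outcome of the creation protocol (CTCP): an agreed, scoped task."""
    def __init__(self, requester, purpose, allowed_fields):
        self.requester = requester
        self.purpose = purpose
        self.allowed_fields = set(allowed_fields)

def ctcp_create(requester, purpose, requested_fields, policy):
    """Creation step: negotiate the task by granting only the intersection
    of what was requested and what policy permits for this purpose."""
    allowed = set(requested_fields) & policy.get(purpose, set())
    if not allowed:
        return None  # negotiation failed: no task is created
    return CollaborationTask(requester, purpose, allowed)

def ctrp_run(task, record, field):
    """Runtime step: serve a request only under a valid task's agreed scope,
    i.e. CTRP runs under the supervision of what CTCP established."""
    if task is None or field not in task.allowed_fields:
        raise PermissionError(f"field '{field}' outside agreed task scope")
    return record[field]

# Illustrative health-record scenario (all data made up for the example).
policy = {"treatment": {"allergies", "medication"}}
record = {"allergies": "penicillin", "medication": "statins",
          "hiv_status": "private"}
```

Splitting negotiation from execution is what lets the goals be negotiated per request rather than fixed in advance, which is the limitation of pre-determined-goal approaches noted above.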
