61

Mining intrusion detection alert logs to minimise false positives & gain attack insight

Shittu, Riyanat O. January 2016 (has links)
Utilising Intrusion Detection System (IDS) logs in security event analysis is crucial to assessing, measuring and understanding the security state of a computer network, often defined by its current exposure and resilience to network attacks. The study of network attacks through event analysis is therefore a fast-growing area. In comparison to its first appearance a decade ago, the complexity involved in achieving effective security event analysis has significantly increased, and with it the need for advances in analytical techniques that maintain timely mitigation and prediction of network attacks. This thesis focuses on improving the quality of analysing network event logs, particularly intrusion detection logs, by exploring alternative analytical methods which overcome some of the complexities involved in security event analysis. The thesis provides four key contributions.

Firstly, we explore how the quality of intrusion alert logs can be improved by eliminating the large volume of false positive alerts they contain. We investigate probabilistic alert correlation, an alternative to traditional rule-based correlation approaches, and hypothesise that it aids in discovering and learning the evolving dependencies between alerts, revealing attack structures and information which can be vital in eliminating false positives. Our findings support this hypothesis and are consistent with existing literature. In addition, evaluating the model on recent attack datasets (rather than the outdated datasets used in many research studies) revealed a new set of issues relevant to modern security event log analysis which have so far been addressed in only a few studies.

Secondly, we propose a set of novel prioritisation metrics for filtering false positive intrusion alerts using knowledge gained during alert correlation. A combination of heuristic, temporal and anomaly detection measures is used to define metrics which capture characteristics identifiable in common attacks, including denial-of-service attacks and worm propagation. The most significant of these metrics, Outmet, is based on the well-known Local Outlier Factor algorithm. Our findings showed that, with a slight trade-off in sensitivity (i.e. true positive performance), Outmet reduces false positives significantly and, compared with the prior state of the art, performs more efficiently across a variety of attack scenarios.

Thirdly, we extend CluStream, a well-known real-time clustering algorithm, to support the categorisation of attack patterns represented as graph-like structures. Our motivation for attack pattern categorisation is to provide automated methods for capturing consistent behavioural patterns across a given class of attacks; to our knowledge, this is a novel approach to intrusion alert analysis. The extension of CluStream results in a novel lightweight real-time clustering algorithm for graph structures. Our findings are new and complement existing literature: in certain case studies, repetitive attack behaviour could be mined, a discovery that could facilitate the prediction of future attacks.
Finally, we acknowledge that due to the intelligence and stealth involved in modern network attacks, automated analytical approaches alone may not suffice in making sense of intrusion detection logs. Thus, we explore visualisation and interactive methods for effective visual analysis which, if combined with the proposed automated approaches, would improve the overall results of the analysis. The result is a visual analytic framework, integrated and tested in a commercial Cyber Security Event Analysis Software System distributed by British Telecom.
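
As a rough illustration of the Local Outlier Factor idea that Outmet builds on, the sketch below scores correlated alert clusters with scikit-learn's LocalOutlierFactor; the feature names, numbers and neighbourhood size are invented for illustration and do not reproduce the thesis's own metric.

    # Sketch: scoring correlated IDS alert clusters with Local Outlier Factor (LOF),
    # the algorithm the Outmet metric builds on. Feature choices are illustrative.
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Each row describes one correlated alert cluster with hypothetical features:
    # [alerts_per_minute, distinct_destination_ports, mean_inter_alert_gap_seconds]
    alert_features = np.array([
        [2.0, 1, 30.0],     # background noise
        [3.0, 2, 25.0],
        [2.5, 1, 28.0],
        [150.0, 800, 0.2],  # port-scan-like burst: stands out from the rest
    ])

    lof = LocalOutlierFactor(n_neighbors=2)
    labels = lof.fit_predict(alert_features)   # -1 = outlier, 1 = inlier
    scores = -lof.negative_outlier_factor_     # higher = more anomalous

    # Prioritise outlying clusters; low-scoring clusters are candidate false positives.
    for features, score, label in zip(alert_features, scores, labels):
        print(f"score={score:5.2f} outlier={label == -1} features={features}")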
62

Representation decomposition for knowledge extraction and sharing using restricted Boltzmann machines

Tran, Son January 2016 (has links)
Restricted Boltzmann machines (RBMs), with their many variations and extensions, are an efficient neural network model that has recently been applied very successfully as a building block for deep networks in diverse areas ranging from language generation to video analysis and speech recognition. Despite their success and the creation of increasingly complex network models and learning algorithms based on RBMs, the question of how knowledge is represented, and could be shared, by such networks has received comparatively little attention. Neural networks are notorious for being difficult to interpret. The area of knowledge extraction addresses this problem by translating network models into symbolic knowledge. Knowledge extraction has normally been applied to feed-forward neural networks trained in supervised fashion using the back-propagation learning algorithm. More recently, research has shown that the use of unsupervised models may improve the performance of network models at learning structure from complex data. In this thesis, we study and evaluate the decomposition of the knowledge encoded by training stacks of RBMs into symbolic knowledge that can offer: (i) a compact representation for recognition tasks; (ii) an intermediate language between hierarchical symbolic knowledge and complex deep networks; and (iii) an adaptive transfer learning method for knowledge reuse. These capabilities are the foundations of a Learning, Extraction and Sharing (LES) system, which we have developed: learning automates the encoding of knowledge from data into an RBM, extraction then translates that knowledge into symbolic form, and sharing allows parts of the knowledge base to be reused to improve learning in other domains. To this end, we introduce confidence rules, which allow the combination of symbolic knowledge and quantitative reasoning. Inspired by Penalty Logic, introduced for Hopfield networks, confidence rules establish a relationship between logical rules and RBMs. However, instead of representing propositional well-formed formulas, confidence rules are designed to account for the reasoning of a stack of RBMs, supporting modular learning and hierarchical inference. This approach shares common objectives with the work on neural-symbolic cognitive agents. We show both in theory and through empirical evaluation that a hierarchical logic program, in the form of a set of confidence rules, can be constructed by decomposing representations in an RBM or a deep belief network (DBN). This decomposition is at the core of a new, computationally efficient knowledge extraction algorithm, which seeks to benefit from the symbolic knowledge representation it produces in order to improve network initialisation in the case of transfer learning. To this end, confidence rules offer a language for encoding symbolic knowledge into a deep network, resulting, as shown empirically in this thesis, in improved modular learning and reasoning. As far as we know, this is the first attempt to extract, encode and transfer symbolic knowledge among DBNs. In a confidence rule, a real value, called the confidence value, is associated with a logical implication rule. We show that the logical rules with the highest confidence values can perform similarly to the original networks.
We also show that by transferring representations learned in one domain and encoding them into another, related or analogous, domain, one may improve the performance of the representations learned in that domain. To this end, we introduce a novel transfer learning algorithm called “Adaptive Profile Transferred Likelihood”, which adapts transferred representations to the target domain data. This algorithm is shown to be more effective than a simple combination of transferred representations with the representations learned in the target domain. It is also less sensitive to noise and therefore more robust to the problem of negative transfer.
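
As a toy illustration of the decomposition idea behind confidence rules, the sketch below reads each hidden unit of a small RBM weight matrix as an implication rule whose confidence value is derived from weight magnitudes; the weights, variable names and threshold are invented, and the thesis's extraction algorithm is considerably more involved.

    # Toy sketch: decomposing an RBM weight matrix into confidence rules.
    # Each hidden unit j yields a rule  h_j <- conjunction of strongly weighted
    # visible units, annotated with a confidence value. All numbers are made up.
    import numpy as np

    visible_names = ["fever", "cough", "rash", "headache"]
    # Hypothetical 4-visible x 2-hidden weight matrix of a trained RBM.
    W = np.array([
        [ 2.1, -0.1],
        [ 1.8,  0.2],
        [-0.3,  2.5],
        [ 0.1,  1.9],
    ])

    threshold = 1.0  # keep only literals whose |weight| clears this (illustrative)

    for j in range(W.shape[1]):
        literals, confidence = [], 0.0
        for i, name in enumerate(visible_names):
            if abs(W[i, j]) >= threshold:
                literals.append(name if W[i, j] > 0 else f"not {name}")
                confidence += abs(W[i, j])
        print(f"h{j} <- {' and '.join(literals)}   [confidence={confidence:.1f}]")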
63

The diagnostic efficacy of JPEG still image compression in three radiological imaging modalities

Patefield, Steven January 2002 (has links)
No description available.
64

A story model of report and work in neuroradiology

Rooksby, J. January 2002 (has links)
No description available.
65

An object layer for conventional file-systems

Wheatman, Martin J. January 1999 (has links)
No description available.
66

Engineer-computer interaction for structural monitoring

Stalker, R. January 2000 (has links)
No description available.
67

A framework for continuous, transparent authentication on mobile devices

Crawford, Heather Anne January 2012 (has links)
Mobile devices have consistently advanced in terms of processing power, amount of memory and functionality. With these advances, the ability to store potentially private or sensitive information on them has increased. Traditional methods for securing mobile devices, passwords and PINs, are inadequate given their weaknesses and the bursty use patterns that characterize mobile devices. Passwords and PINs are often shared, or chosen as weak secrets, to ameliorate the memory load on device owners. Furthermore, they represent point-of-entry security, which provides access control but not authentication. Alternatives to these traditional methods have been suggested. Examples include graphical passwords, biometrics and sketched passwords, among others. These alternatives all have their place in an authentication toolbox, as do passwords and PINs, but do not respect the unique needs of the mobile device environment. This dissertation presents a continuous, transparent authentication method for mobile devices called the Transparent Authentication Framework. The Framework uses behavioral biometrics, which are patterns in how people perform actions, to verify the identity of the mobile device owner. It is transparent in that the biometrics are gathered in the background while the device is used normally, and continuous in that verification takes place regularly. The Framework requires little effort from the device owner, goes beyond access control to provide authentication, and is acceptable and trustworthy to device owners, all while respecting the memory and processor limitations of the mobile device environment.
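
A minimal sketch of the behavioural-biometric verification idea, assuming keystroke hold times as the feature and a simple z-score test against a stored owner profile; the Framework's actual biometrics and decision logic are richer than this, and all numbers are invented.

    # Sketch: continuous verification against a stored behavioural profile.
    # Features (keystroke hold times in ms) and the threshold are illustrative.
    import numpy as np

    # Owner profile learned in the background during normal use (hypothetical).
    profile_mean = np.array([95.0, 110.0, 88.0])   # per-key mean hold time
    profile_std = np.array([8.0, 10.0, 7.0])

    def verify(window: np.ndarray, max_z: float = 2.5) -> bool:
        """Return True if the observed window is consistent with the owner."""
        z = np.abs((window.mean(axis=0) - profile_mean) / profile_std)
        return bool(np.all(z < max_z))

    owner_window = np.array([[93, 108, 90], [97, 112, 86], [96, 109, 89]])
    imposter_window = np.array([[140, 160, 120], [150, 155, 130], [145, 158, 125]])

    print(verify(owner_window))     # True  -> no challenge needed
    print(verify(imposter_window))  # False -> fall back to explicit authentication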
68

Quality assessment of service providers in a conformance-centric Service Oriented Architecture

Shercliff, Gareth January 2009 (has links)
In a Service Oriented Architecture (SOA), the goal of consumers is to discover and use services that give them the highest quality of experience, such that their expectations and needs are satisfied. In supporting this discovery, quality assessment tools are required to establish the degree to which these expectations will be met by specific services. Traditional approaches to quality assessment in SOA assume that providers and consumers of services will adopt a performance-centric view of quality, under which consumers are most satisfied when they receive the highest absolute performance. However, this approach does not consider the subjective nature of quality and will not necessarily lead to consumers receiving services that meet their individual needs. By using existing approaches to quality assessment that assume a consumer's primary goal is optimisation of performance, consumers in SOA are currently unable to effectively identify and engage with providers who deliver services that will best meet their needs. Developing approaches to assessment that adopt a more conformance-centric view of quality (where it is assumed that consumers are most satisfied when a service meets, but does not necessarily exceed, their individual expectations) is a challenge that must be addressed if consumers are to effectively adopt SOA as a means of accessing services. In addressing this challenge, this thesis develops a conformance-centric model of an SOA in which conformance is taken to be the primary goal of consumers. This model is holistic, in that it considers consumers, providers and assessment services and their relationship, and novel in that it proposes a set of rational provider behaviours that would be adopted under a conformance-centric view of quality. Adopting such conformance-centric behaviour leads to observable and predictable patterns in the performance of the services offered by providers, due to the relationship between the level of service delivered and the expectation of the consumer. In order to support consumers in the discovery of high quality services, quality assessment tools must be able to effectively assess past performance information about services and use it as a prediction of future performance. In supporting consumers within a conformance-centric SOA, this thesis proposes and evaluates a new set of approaches to quality assessment which make use of the patterns in provider behaviour described above. The approaches developed are non-trivial: they use a selection of adapted pattern classification and other statistical techniques to infer the behaviour of individual services at run-time, and calculate a numerical measure of confidence for each result that consumers can use to combine assessment information with other evidence. The quality assessment approaches are evaluated within a software implementation of a conformance-centric SOA, where they are shown to lead to consumers experiencing higher quality than with existing performance-centric approaches. By introducing conformance-centric principles into existing real-world SOA, consumers will be able to evaluate and engage with providers that offer services differentiated on the basis of consumer expectation. The benefits of such capability over the current state of the art in SOA are twofold. Firstly, individual consumers will receive higher quality services, and therefore will be more likely to have their needs effectively satisfied.
Secondly, the availability of assessment tools which acknowledge the conformance-centric nature of consumers will encourage providers to offer a range of services for consumers with varying expectations, rather than simply offering a single service that aims to deliver maximum performance. This recognition will allow providers to use their resources more efficiently, leading to reduced costs and increased profitability. Such benefits can only be realised by adopting a conformance-centric view of quality across the SOA and by providing assessment services that operate effectively in such environments. This thesis proposes, develops and evaluates models and approaches that enable the achievement of this goal.
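
A minimal sketch of the conformance-centric assessment idea: a provider is rated on how often past deliveries met a consumer's individual expectation, with a crude sample-size confidence alongside the result. The formula and numbers are illustrative and stand in for the thesis's pattern-classification techniques.

    # Sketch: conformance-centric assessment of a provider from past observations.
    # A provider is rated on how often delivered quality met the consumer's
    # expectation, not on raw performance. The confidence formula is illustrative.
    import math

    def assess(delivered: list[float], expectation: float) -> tuple[float, float]:
        """Return (conformance, confidence) for one consumer expectation."""
        met = sum(1 for d in delivered if d >= expectation)
        conformance = met / len(delivered)
        # Crude confidence: more observations give a more trustworthy estimate.
        confidence = 1.0 - 1.0 / math.sqrt(len(delivered))
        return conformance, confidence

    # Hypothetical observed availability of one provider over six periods.
    history = [0.92, 0.95, 0.90, 0.97, 0.93, 0.91]
    print(assess(history, expectation=0.90))  # high conformance for a modest expectation
    print(assess(history, expectation=0.99))  # low conformance for a demanding one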
69

Model driven certification of Cloud service security based on continuous monitoring

Krotsiani, M. January 2016 (has links)
Cloud Computing technology offers an advanced approach to the provision of infrastructure, platform and software services without the extensive cost of owning, operating or maintaining the required computational infrastructure. However, despite being cost effective, this technology has raised concerns regarding the security, privacy and compliance of data and services offered through cloud systems. This is mainly due to the lack of transparency of services to consumers, or to the fact that service providers are unwilling to take full responsibility for the security of the services they offer through cloud systems and to accept liability for security breaches [18]. In such circumstances, there is a trust deficiency that needs to be addressed. The potential of certification as a means of addressing the lack of trust regarding the security of different types of services, including the cloud, has been widely recognised [149]. However, this recognition has not led to the wide adoption that was expected. The reason may be that certification has traditionally been carried out through standards and certification schemes (e.g., ISO27001 [149], ISO27002 [149] and Common Criteria [65]), which involve predominantly manual security auditing, testing and inspection processes. Such processes tend to be lengthy and carry a significant financial cost, which often prevents small technology vendors from adopting certification [87]. In this thesis, we present an automated approach to cloud service certification in which the evidence is gathered through continuous monitoring. This approach can be used to: (a) automatically define and execute certification models that acquire and analyse evidence regarding the provision of services on cloud infrastructures through continuous monitoring; (b) use this evidence to assess whether the provision complies with required security properties; and (c) generate and manage digital certificates confirming the compliance of services with specific security properties.
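
A minimal sketch of certification driven by continuous monitoring, assuming a single invented security property (hourly key rotation) and invented event fields; a real certification model would cover many properties and evidence types.

    # Sketch: continuous-monitoring certification of one security property.
    # Property, events and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Event:
        minutes_since_key_rotation: float  # monitoring evidence from the cloud service

    def property_holds(event: Event, max_minutes: float = 60.0) -> bool:
        """Security property: encryption keys are rotated at least hourly."""
        return event.minutes_since_key_rotation <= max_minutes

    def certify(evidence: list[Event]) -> str:
        """Issue or revoke the certificate from the monitored evidence window."""
        return "CERTIFIED" if all(property_holds(e) for e in evidence) else "REVOKED"

    print(certify([Event(12.0), Event(45.0), Event(58.0)]))  # CERTIFIED
    print(certify([Event(12.0), Event(95.0)]))               # REVOKED: property violated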
70

Systematic analysis and modelling of diagnostic errors in medicine

Guo, Shijing January 2016 (has links)
Diagnostic accuracy is an important index of the quality of health care. Missed, wrong or delayed diagnoses have a direct effect on patient safety. Diagnostic errors have been discussed at length; however, the field still lacks a systemic research approach. This thesis treats the diagnostic process as a system and develops a systemic model of diagnostic errors by combining system dynamics modelling with regression analysis. It aims to provide a better way of studying diagnostic errors, as well as a deeper understanding of how factors affect the number of possible errors at each step of the diagnostic process and how they ultimately contribute to patient outcomes. The work is carried out in two parts. In the first part, a qualitative model is developed to demonstrate how errors can happen during the diagnostic process; in other words, the model illustrates the connections among key factors and dependent variables. It starts by identifying key factors in diagnostic errors, producing a hierarchical list of factors, and then illustrates the interrelation loops that link relevant factors with errors. The qualitative model is based on the findings of a systematic literature review and is further refined by expert review. In the second part, a quantitative model is developed to simulate system behaviour, demonstrating the quantitative relations among factors and errors during the diagnostic process. Regression analysis is used to estimate the quantitative relationships between multiple factors and their dependent variables during the history-taking and physical-examination phase of diagnosis. The regression models are then embedded in quantitative system dynamics ‘stock and flow’ diagrams. The quantitative model traces error flows during the diagnostic process and simulates how changing one or more variables affects diagnostic errors and patient outcomes over time; such changes may reflect a change in demand from policy or a proposed external intervention. The results suggest the systemic model has the potential to help understand diagnostic errors, observe model behaviours, and provide risk-free simulation experiments for possible strategies.
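
A minimal ‘stock and flow’ sketch of error accumulation during diagnosis, integrated with simple Euler steps; the coefficients stand in for the thesis's fitted regression models and are invented for illustration.

    # Sketch: a two-stock "stock and flow" model of diagnostic errors.
    # Coefficients stand in for fitted regression models and are invented.

    def simulate(weeks: int = 52, workload: float = 1.0, dt: float = 1.0):
        undetected_errors = 0.0   # stock: errors made but not yet caught
        adverse_outcomes = 0.0    # stock: errors that reached the patient
        for _ in range(weeks):
            # Inflow: errors introduced during history taking / examination,
            # assumed to grow with clinician workload (regression stand-in).
            error_inflow = 2.0 + 3.0 * workload
            detection = 0.30 * undetected_errors   # errors caught and corrected
            harm = 0.10 * undetected_errors        # errors that become outcomes
            undetected_errors += dt * (error_inflow - detection - harm)
            adverse_outcomes += dt * harm
        return undetected_errors, adverse_outcomes

    print(simulate(workload=1.0))  # baseline
    print(simulate(workload=1.5))  # what-if: higher workload, more errors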
