About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Intrusion prediction system for cloud computing and network based systems

Abdlhamed, M. January 2018 (has links)
Cloud computing offers cost-effective computational and storage services with on-demand, scalable capacity according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a popular technology that opens the horizons for many businesses, it is a new paradigm that exploits existing computing technologies in a new framework rather than being a novel technology in itself. This means that cloud computing inherits classical computing problems that are still challenging. Security is considered one of the major problems of cloud computing, requiring strong security systems to protect the platform and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detects cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations: attacks are typically detected only once the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis proposes an Intrusion Prediction System capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations, and it has three main modules. The monitoring module observes the network for any intrusion preparations. This thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, a reconnaissance step that is essential to network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for indications of an attack or intrusion. This is achieved by exploring and applying different statistical and probabilistic methods for scan detection. The second module of the prediction system is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability that the system will fall victim to a cyber-attack. Finally, the third module is the prediction module, which combines the output of the other two modules and performs a risk assessment of the system's security to predict intrusions. The results of the conducted experiments show that the suggested system outperforms analogous methods in network scan detection performance, and accordingly significantly improves the security of the targeted system. The scanning detection algorithm achieved high detection accuracy with a 0% false-negative rate and a 50% false-positive rate. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
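The abstract does not specify the dynamic-selective algorithm itself, but the general idea of statistical scan detection can be sketched as follows. This is a minimal illustrative detector, not the thesis's method: it flags a source that contacts many distinct (host, port) targets within a sliding time window; the threshold, window length and addresses are all assumptions.

```python
from collections import defaultdict

class ScanDetector:
    """Minimal sketch of statistical scan detection: a source contacting
    many distinct (host, port) pairs within a short window is flagged.
    Threshold and window values are illustrative, not the thesis's."""

    def __init__(self, threshold=20, window=60.0):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(list)  # src -> [(time, (dst, port)), ...]

    def observe(self, t, src, dst, port):
        hist = self.events[src]
        hist.append((t, (dst, port)))
        # drop events that have fallen outside the sliding window
        self.events[src] = [(ts, k) for ts, k in hist if t - ts <= self.window]

    def is_scanning(self, src):
        targets = {k for _, k in self.events[src]}
        return len(targets) >= self.threshold

det = ScanDetector(threshold=10)
for p in range(15):                              # scanner probing many ports
    det.observe(float(p), "10.0.0.9", "10.0.0.1", p)
det.observe(0.0, "10.0.0.5", "10.0.0.1", 80)     # ordinary client
print(det.is_scanning("10.0.0.9"), det.is_scanning("10.0.0.5"))  # → True False
```

A real detector would replace the fixed threshold with the statistical and probabilistic tests the thesis explores, but the sliding-window structure is the same.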
192

Efficient runtime security system for decentralised distributed systems

Thulnoon, A. A. T. January 2018 (has links)
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer and so on, thereby increasing cooperation, efficiency and reliability in dealing with users and data resources jointly. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task, because the unconventional nature of distributed systems can produce many security problems such as phishing, denial of service or eavesdropping. Adopting security and privacy policies in distributed systems therefore increases users' trust in these systems. However, adding or updating security is one of the most challenging concerns, and it depends on the various security vulnerabilities that exist in distributed systems. The most significant is inserting, modifying or even removing a security concern according to the security status that may arise at runtime. Moreover, these problems are exacerbated when the system adopts the multi-hop concept for transmitting and processing information. This poses significant security challenges, especially in decentralised distributed systems, where security must be provided end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA, for example, supports only one-to-one relationships, while DSAW deals with end-to-end security but does not take into account the possibility of information sensitivity changing at runtime. This thesis proposes a mechanism for enforcing security policies and dealing with distributed systems' security weaknesses from a software perspective. The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns at compile time and runtime. It is based on a decentralised distributed system that adopts the multi-hop concept to deal with different requested tasks, and it focuses on achieving high accuracy, data integrity and high efficiency in real time. This is done by modularising the most effective security solutions, access control and cryptography, using an aspect-oriented programming language. The experimental results show that the proposed solution overcomes the shortcomings of existing solutions by fully integrating with the decentralised distributed system to achieve dynamic behaviour, high cooperation, high performance and end-to-end holistic security.
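The thesis uses an aspect-oriented language to weave security into the system; the same cross-cutting idea can be hinted at in Python with a decorator acting as a minimal "aspect". This is a hedged sketch, not the thesis's implementation: the role names, user structure and handler are all invented for illustration.

```python
import functools

def access_control(required_role):
    """A decorator 'aspect' that weaves an access-control check around any
    task handler without modifying the handler's own logic. Role names and
    the user dictionary shape are illustrative assumptions."""
    def aspect(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if required_role not in user.get("roles", ()):
                raise PermissionError(f"{user['name']} lacks role {required_role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return aspect

@access_control("admin")
def delete_record(user, record_id):
    # the business logic stays free of any security code
    return f"record {record_id} deleted by {user['name']}"

admin = {"name": "alice", "roles": ("admin",)}
guest = {"name": "bob", "roles": ("viewer",)}
print(delete_record(admin, 42))          # → record 42 deleted by alice
try:
    delete_record(guest, 42)
except PermissionError as e:
    print("denied:", e)
```

In a true AOP language such as AspectJ, the weaving happens at compile time or load time and can be attached to whole families of join points at once, which is what makes inserting or removing a security concern at runtime tractable.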
193

Developing an advanced IPv6 evasion attack detection framework

Tajdini, M. January 2018 (has links)
Internet Protocol Version 6 (IPv6) is the most recent generation of the Internet Protocol. The transition from the current Internet Protocol Version 4 (IPv4) to IPv6 has raised new issues, the most crucial of which is security vulnerabilities. Most vulnerabilities are common to IPv4 and IPv6, e.g. evasion attacks, Distributed Denial of Service (DDoS) and fragmentation attacks. According to the IPv6 RFC (Request for Comments) recommendations, there are potential attacks against various operating systems, and discrepancies between the behaviour of different operating systems can lead to Intrusion Detection System (IDS) evasion, firewall evasion, operating system fingerprinting, network mapping, DoS/DDoS attacks and remote code execution attacks. We investigated some of the security issues of IPv6 by reviewing existing solutions and methods, and tested two open-source Network Intrusion Detection Systems (NIDSs), Snort and Suricata, against a number of IPv6 evasion and attack methods. The results show that both NIDSs are unable to detect most of the methods used to evade detection. This thesis presents a detection framework specifically developed for IPv6 networks to detect evasion, insertion and DoS attacks that use IPv6 Extension Headers and Fragmentation. We implemented the proposed theoretical solution as a framework for evaluation tests. To develop the framework, the "dpkt" module is employed to capture and decode packets. During the development phase, a bug was found in the module's packet parsing/decoding, and a patch was provided so that the module decodes IPv6 packets correctly. The standard unpack function in the "ip6" section of the "dpkt" package consumes extension headers as it parses them, which means that after parsing one no longer has access to the extension headers in their original order. Defining a new field called all_extension_headers and appending each header to it before the parser moves on gives us access to all the extension headers while keeping the original parse speed of the framework virtually untouched. The extra memory footprint is also negligible, as it is a linear fraction of the size of the whole packet. By decoding the packet, extracting its data and evaluating the data against user-defined values, the proposed framework is able to detect IPv6 evasion, insertion and DoS attacks. The proposed framework consists of four layers. The first layer captures the network traffic and passes it to the second layer for packet decoding, which is the most important part of the detection process: if the NIDS cannot decode and extract the packet content, it cannot pass correct information to the Detection Engine. Once the packet has been decoded, it is sent to the third layer, the brain of the proposed solution, which makes a decision by evaluating the information against the defined values to determine whether the packet is a threat. This layer is called the Detection Engine. Once the packet has been examined by the detection processes, the result is sent to the output layer. If the packet matches a type or signature that the system administrator chose, the framework raises an alarm and automatically logs all details of the packet, saving them for further investigation. We evaluated the proposed framework and its constituent processes via numerous experiments. The results show that the proposed framework, called the NOPO framework, offers better detection accuracy, a more accurate packet-decoding process and reduced resource usage compared to both existing NIDSs.
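The property that all_extension_headers preserves, the extension headers in their original order, can be illustrated with a small self-contained walker over raw bytes (used here instead of dpkt so the example needs no third-party module). The toy packet layout below is an assumption built for illustration; the header type numbers and the generic length formula follow the IPv6 specification.

```python
import struct

# IANA next-header numbers for common IPv6 extension headers (subset)
EXT_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment", 60: "Dest Options"}

def walk_extension_headers(pkt):
    """Return the IPv6 extension headers in their original order as
    (type, raw_bytes) pairs, plus the final upper-layer protocol number."""
    next_hdr = pkt[6]            # Next Header field of the 40-byte base header
    offset = 40
    chain = []
    while next_hdr in EXT_HEADERS:
        # generic layout: next header (1 byte), then length in 8-octet units
        # not counting the first 8 octets (the Fragment header sets this to 0)
        hdr_len = 8 + pkt[offset + 1] * 8
        chain.append((next_hdr, pkt[offset:offset + hdr_len]))
        next_hdr = pkt[offset]
        offset += hdr_len
    return chain, next_hdr

# toy packet: base header -> Hop-by-Hop -> Fragment -> UDP (protocol 17)
base = struct.pack("!IHBB16s16s", 6 << 28, 16, 0, 64, b"\x00" * 16, b"\x00" * 16)
hbh  = bytes([44, 0]) + b"\x01\x04\x00\x00\x00\x00"   # next=Fragment, PadN option
frag = bytes([17, 0]) + b"\x00" * 6                    # next=UDP, reserved/offset
chain, upper = walk_extension_headers(base + hbh + frag)
print([EXT_HEADERS[t] for t, _ in chain], upper)  # → ['Hop-by-Hop', 'Fragment'] 17
```

Keeping each (type, bytes) pair as the chain is walked is exactly the kind of ordered record the all_extension_headers field provides, and it costs only one list append per header.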
194

A machine learning approach for smart computer security audit

Pozdniakov, K. January 2017 (has links)
This thesis presents a novel application of machine learning technology to automate network security audits, and penetration testing processes in particular. A model-free reinforcement learning approach is presented, characterised by the absence of an environmental model: the model is derived autonomously by the audit system while acting in the tested computer network. The penetration testing process is specified as a Markov decision process (MDP) without defining reward and transition functions for every state/action pair. The presented approach applies traditional and modified Q-learning algorithms. A traditional Q-learning algorithm learns the action-value function, stored in a table, which gives the expected utility of executing a particular action in a particular state of the penetration testing process. The modified Q-learning algorithm differs by incorporating a state-space approximator and representing the action-value function as a linear combination of features. Two deep architectures of the approximator are presented: an autoencoder joined with an artificial neural network (ANN), and an autoencoder joined with a recurrent neural network (RNN). The autoencoder is used to derive the feature set describing audited hosts. The ANN approximates the state space of the audit process based on the derived features. The RNN is a more advanced version of the approximator, with additional loop connections from the hidden to the input layers of the neural network. This architecture incorporates previously executed actions into new inputs, giving the audit system the opportunity to learn sequences of actions leading to the goal of the audit, which is defined as obtaining administrator rights on the host. The model-free reinforcement learning approach based on traditional Q-learning was also applied to reveal new vulnerabilities, buffer overflows in particular. The penetration testing system showed the ability to discover a string exploiting a potential vulnerability by learning its formation process on the go. To prove the concept and to test the efficiency of the approach, an audit tool was developed. The presented results demonstrate the adaptivity of the approach and the performance of the algorithms and deep machine learning architectures. Different sets of hyperparameters are compared graphically to test the ability to converge to the optimal action policy, a sequence of actions leading to the audit goal (obtaining administrator rights on the remote host). The testing environment is also presented. It consists of 80+ virtual machines on a vSphere virtualisation platform. This combination of hosts represents a typical corporate network with a users segment, a demilitarized zone (DMZ) and an external segment (the Internet), with typical corporate services available: a web server, mail server, file server, SSH and an SQL server. During the testing process, the audit system acts as an attacker from the Internet.
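The tabular Q-learning update at the heart of the approach can be shown on a toy problem. This is a hedged sketch, not the thesis's audit system: the "pentest" here is a deterministic four-state chain where one illustrative action per state advances towards the goal (admin rights), and all hyperparameters are invented.

```python
import random

random.seed(0)
N_STATES, N_ACTIONS, GOAL = 4, 2, 3   # toy chain: 0 -> 1 -> 2 -> 3 (admin rights)
CORRECT = [1, 0, 1]                   # which action advances from each state (illustrative)

def step(s, a):
    """Deterministic toy environment: the correct action advances one state,
    any other action leaves the state unchanged; reward only at the goal."""
    s2 = s + 1 if a == CORRECT[s] else s
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
for _ in range(500):                  # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.randrange(N_ACTIONS)               # explore
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])  # exploit
        s2, r = step(s, a)
        # the standard tabular Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)   # greedy policy should recover the correct action sequence
```

The thesis's modified variant replaces the table `Q` with an approximator (autoencoder plus ANN or RNN) so that the same update rule scales to large host-feature state spaces.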
195

Data-independent vs. data-dependent dimension reduction for pattern recognition in high dimensional spaces

Hassan, Tahir Mohammed January 2017 (has links)
There has been a rapid emergence of new pattern recognition/classification techniques in a variety of real-world applications over the last few decades. In most pattern recognition/classification applications, the pattern of interest is modelled by a data vector/array of very high dimension. The main challenges in such applications relate to the efficiency of retrieving, analysing and verifying/classifying the pattern/object of interest. The "curse of dimensionality" refers to these challenges and is commonly addressed by Dimension Reduction (DR) techniques. Several DR techniques have been developed and implemented in a variety of applications. The most common DR schemes depend on a dataset of "typical samples" (e.g. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA)). However, data-independent DR schemes (e.g. the Discrete Wavelet Transform (DWT) and Random Projections (RP)) are becoming more desirable due to the low density ratio of samples to dimensions. In this thesis, we critically review both types of techniques and highlight their advantages and disadvantages in terms of efficiency and impact on recognition accuracy. We study the theoretical justification for the existence of DR transforms that preserve, within a tolerable error, the distances between would-be feature vectors modelling objects of interest. We observe that data-dependent DRs do not specifically attempt to preserve distances, and that the problems of overfitting and bias are consequences of a low density ratio of samples to dimensions. Accordingly, our investigations focus on data-independent DR schemes, and in particular on the different ways of generating RPs as an efficient DR tool. RPs suitable for pattern recognition applications are restricted only by a lower bound on the reduced dimension that depends on the tolerable error. Besides the known RPs, which are generated according to certain probability distributions, we investigate and test the performance of differently constructed over-complete m×n (m << n) Hadamard submatrices, using the inductive Sylvester and Walsh-Paley methods. Our experimental work, conducted for two case studies (Speech Emotion Recognition (SER) and Gait-Based Gender Classification (GBGC)), demonstrates that these matrices perform as well as, if not better than, data-dependent DR schemes. Moreover, dictionaries obtained by sampling the top rows of Walsh-Paley matrices outperform matrices constructed more randomly, though this may be influenced by the type of biometric and/or recognition scheme. We also propose feature-block (FB) based DR as an innovative way to overcome the problem of low-density-ratio applications and demonstrate its success for the SER case study.
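The inductive Sylvester construction mentioned above is simple enough to sketch directly: starting from the 1×1 matrix [1], each step doubles the size via H -> [[H, H], [H, -H]], and the top m rows of the result can serve as an m×n projection dictionary. This is an illustrative sketch of the construction only; the thesis's sampling strategies and case-study pipelines are not reproduced.

```python
def sylvester_hadamard(n):
    """Sylvester's inductive construction of an n x n Hadamard matrix
    (n must be a power of two): H_{2k} = [[H, H], [H, -H]]."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def project(H, m, v):
    """Use the top m rows of H as a dimension-reduction dictionary,
    mapping an n-dimensional vector v down to m dimensions."""
    return [sum(h * x for h, x in zip(row, v)) for row in H[:m]]

H = sylvester_hadamard(8)
# rows of a Hadamard matrix are pairwise orthogonal: H H^T = n I
gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in H] for r1 in H]
print(all(gram[i][j] == (8 if i == j else 0) for i in range(8) for j in range(8)))  # → True
v = [1, 2, 3, 4, 5, 6, 7, 8]
print(project(H, 4, v))   # a 4-dimensional reduced representation of v
```

Because the entries are ±1, the projection needs only additions and subtractions, which is one reason Hadamard-based dictionaries are attractive as data-independent DR.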
196

Recognition of mathematical handwriting on whiteboards

Sabeghi Saroui, Behrang January 2015 (has links)
Automatic recognition of handwritten mathematics has enjoyed significant improvements in the past decades. In particular, online recognition of mathematical formulae has seen a number of important advancements. However, in reality most mathematics is still taught and developed on regular whiteboards, and offline recognition remains an open and challenging task in this area. In this thesis we develop methods to recognise mathematics from static images of handwritten expressions on whiteboards, while leveraging the strength of online recognition systems by transforming offline data into online information. Our approach is based on trajectory recovery techniques that allow us to reconstruct the actual stroke information necessary for online recognition. To this end we develop a novel recognition process especially designed to deal with whiteboards by prudently extracting information from colour images. To evaluate our methods we use an online recogniser for the recognition task, specifically trained for the recognition of maths symbols. We present our experiments with varying quality and sources of images. In particular, we have used our approach successfully in a set of experiments using Google Glass for capturing images from whiteboards, achieving accuracies of up to 88.03% and 84.54% for the segmentation and recognition of mathematical symbols, respectively.
197

Coherent minimisation : aggressive optimisation for symbolic finite state transducers

Al-Zobaidi, Zaid January 2014 (has links)
Automata minimisation is a key technique for reducing the computational resources that drive the cost of computation. Most conventional minimisation techniques are based on the notion of bisimulation to determine equivalent states that can be identified. Although the minimisation of automata is an established topic of research, the optimisation of automata that work in constrained environments is a novel idea, which we examine in this dissertation along with a motivating, non-trivial application to efficient tamper-proof hardware compilation. This thesis introduces a new notion of equivalence between the states of a transducer: coherent equivalence. It is weaker than the usual notions of bisimulation, so it leads to more states being identified as equivalent. This new equivalence relation can be utilised to aggressively optimise transducers by reducing the number of states, a technique which we call coherent minimisation. We note that coherent minimisation always outperforms conventional minimisation algorithms. The main result of this thesis is that coherent minimisation is sound and compositional. To support more realistic applications to hardware synthesis, we also introduce a refined model of transducers, which we call symbolic finite state transducers, that can model systems involving very large or infinite data types.
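Coherent minimisation itself is not reproduced here, but the conventional bisimulation-style baseline it improves upon can be sketched as Moore-style partition refinement on a small DFA. This is an illustrative sketch with an invented example automaton, not code from the dissertation.

```python
def minimise(states, alphabet, delta, accepting):
    """Moore-style partition refinement: repeatedly split blocks of states
    until states in the same block agree on acceptance and on which block
    each input symbol leads to -- the classic bisimulation-based baseline."""
    # initial partition: accepting vs non-accepting states
    block_of = {s: (s in accepting) for s in states}
    while True:
        # a state's signature: its block plus the blocks of its successors
        signature = {s: (block_of[s], tuple(block_of[delta[s][a]] for a in alphabet))
                     for s in states}
        new_ids = {sig: i for i, sig in enumerate(sorted(set(signature.values()), key=repr))}
        new_block = {s: new_ids[signature[s]] for s in states}
        if len(set(new_block.values())) == len(set(block_of.values())):
            return new_block      # no block split: partition is stable
        block_of = new_block

# toy DFA over {a}: states 1 and 2 are behaviourally equivalent
states = [0, 1, 2, 3]
delta = {0: {"a": 1}, 1: {"a": 3}, 2: {"a": 3}, 3: {"a": 3}}
blocks = minimise(states, ["a"], delta, accepting={3})
print(blocks[1] == blocks[2], blocks[0] != blocks[3])  # → True True
```

Coherent equivalence is weaker than the refinement criterion above, so it can merge states that this baseline keeps apart, which is why coherent minimisation never produces a larger transducer.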
198

Using current uptime to improve failure detection in peer-to-peer networks

Price, Richard Michael January 2010 (has links)
Peer-to-Peer (P2P) networks share computer resources or services through the exchange of information between participating nodes. These nodes form a virtual network overlay by creating a number of connections with one another. Due to the transient nature of nodes within these systems, any connection formed should be monitored and maintained to ensure the routing table is kept up to date. Typically, P2P networks predefine a fixed keep-alive period: a maximum interval within which connected nodes must exchange messages. If no other message has been sent within this interval, keep-alive messages are exchanged to ensure the corresponding node has not left the system. A fixed periodic interval can be viewed as a centralised, static and deterministic mechanism, maintaining overlays in a predictable, reliable and non-adaptive fashion. Several studies have shown that older peers are more likely to remain in the network than their short-lived counterparts. Therefore, using the distribution of peer session times and the current age of peers as key attributes, we propose three algorithms that allow connections to extend the interval between successive keep-alive messages based upon the likelihood that the corresponding node will remain in the system. By prioritising keep-alive messages to nodes that are more likely to fail, our algorithms reduce the expected delay between failures occurring and their subsequent detection. Using extensive empirical analysis, we analyse the properties of these algorithms and compare them to the standard periodic approach in unstructured and structured network topologies, using trace-driven simulations based upon measured network data. Furthermore, we investigate the effect on our adaptive algorithms of nodes that misreport their age, and detail an efficient keep-alive algorithm that can adapt to the limitations of network address translation devices.
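The core intuition, probe older peers less often because heavy-tailed session-time distributions make their expected residual lifetime longer, can be sketched with a simple age-dependent interval function. The function shape and all constants below are illustrative assumptions, not the thesis's three algorithms.

```python
def keepalive_interval(age, base=30.0, factor=0.1, cap=600.0):
    """Hedged sketch: with heavy-tailed (e.g. Pareto-like) session times,
    the expected residual lifetime grows with a peer's current age, so
    older peers can be probed less frequently. Constants are illustrative."""
    return min(base + factor * age, cap)

# youngest peers are the most failure-prone, so they are probed first
peers = {"young": 60.0, "mid": 3600.0, "old": 86400.0}   # current ages in seconds
intervals = {name: keepalive_interval(a) for name, a in peers.items()}
print(sorted(intervals, key=intervals.get))   # → ['young', 'mid', 'old']
```

In the standard periodic approach every peer would get the same interval; ordering probes by predicted failure likelihood is what reduces the gap between a failure and its detection.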
199

Mitigating private key compromise

Yu, Jiangshan January 2016 (has links)
Cryptosystems rely on the assumption that computer end-points can securely store and use cryptographic keys, yet this assumption is rather hard to justify in practice. New software vulnerabilities are discovered every day, and malware is pervasive on mobile devices and desktop PCs. This thesis provides research on how to mitigate private key compromise in three different cases. The first case considers compromised signing keys of certificate authorities in a public key infrastructure. To address this problem, we analyse and evaluate existing prominent certificate management systems, and propose a new system called "Distributed and Transparent Key Infrastructure", which is secure even if all service providers collude. The second case considers key compromise in secure communication. We develop a simple approach that either guarantees the confidentiality of messages sent to a device even if the device was previously compromised, or allows the user to detect that confidentiality has failed. We propose a multi-device messaging protocol that exploits our concept to allow users to detect unauthorised usage of their device keys. The third case considers key compromise in secret distribution. We develop a self-healing system that provides a proactive security guarantee: an attacker can learn a secret only if they can compromise all servers simultaneously within a short period.
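The thesis's self-healing construction is not given in the abstract, but the threshold secret sharing that underlies distributed secret storage can be sketched with Shamir's scheme, where an attacker needs t shares at once to learn anything. This is a textbook sketch under illustrative parameters, not the thesis's protocol.

```python
import random

P = 2**127 - 1   # a Mersenne prime defining the field; illustrative choice

def split(secret, n, t):
    """Shamir threshold sharing: evaluate a random degree t-1 polynomial
    with constant term = secret at x = 1..n; any t shares reconstruct it,
    while fewer than t reveal nothing about the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # modular inverse
    return secret

random.seed(1)
shares = split(123456789, n=5, t=3)
print(reconstruct(shares[:3]) == 123456789, reconstruct(shares[2:]) == 123456789)  # → True True
```

Proactive, self-healing variants additionally re-randomise the shares periodically, which is what forces an attacker to compromise all servers within one short period rather than one at a time.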
200

Flexible robotic control via co-operation between an operator and an AI-based control system

Chiou, Emmanouil January 2017 (has links)
This thesis addresses the problem of variable autonomy in teleoperated mobile robots. Variable autonomy refers to the approach of incorporating several different Levels of Autonomy (LOA), ranging from pure teleoperation (the human has complete control of the robot) to full autonomy (the robot controls every capability), within a single robot. Most robots used for demanding and safety-critical tasks (e.g. search and rescue, hazardous environment inspection) are currently teleoperated in simple ways, but could soon benefit from variable autonomy. Variable autonomy would allow Artificial Intelligence (AI) control algorithms to take control of certain functions when the human operator is suffering from high workload, high cognitive load, anxiety, or other distractions and stresses, while some circumstances may still necessitate direct human control of the robot. More specifically, this thesis investigates dynamically changing LOA (i.e. during task execution) using either Human-Initiative (HI) or Mixed-Initiative (MI) control. MI refers to a peer-to-peer relationship between the robot and the operator in terms of the authority to initiate actions and LOA switches. HI refers to the human operator switching LOA based on their judgement, with the robot having no capacity to initiate LOA switches. A HI controller and a novel expert-guided MI controller are presented in this thesis. These controllers were evaluated using a multidisciplinary, systematic experimental framework that combines quantifiable and repeatable performance-degradation factors for both the robot and the operator. The thesis presents statistically validated evidence that variable autonomy, in the form of HI and MI, provides advantages compared to using teleoperation alone or autonomy alone in various scenarios. Lastly, analyses of the interactions between the operators and the variable autonomy systems are reported. These analyses highlight the importance of personality traits and preferences, trust in the system, and the operator's understanding of the system, in the context of HRI with the proposed controllers.
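The MI idea, either agent may trigger a LOA switch when the current controller's performance degrades, can be reduced to a small rule sketch. The metric names, thresholds and LOA labels below are illustrative assumptions, not the expert-guided controller from the thesis.

```python
def mi_loa_switch(current_loa, robot_perf, operator_load, threshold=0.5):
    """Hedged sketch of a Mixed-Initiative switching rule: the robot may
    take control when the operator is overloaded, and hand control back
    when its own task performance degrades. Metrics are illustrative."""
    if current_loa == "teleoperation" and operator_load > threshold:
        return "autonomy"       # robot-initiated switch: operator overloaded
    if current_loa == "autonomy" and robot_perf < threshold:
        return "teleoperation"  # hand control back to the human operator
    return current_loa          # no reason to switch

print(mi_loa_switch("teleoperation", robot_perf=0.9, operator_load=0.8))  # → autonomy
print(mi_loa_switch("autonomy", robot_perf=0.3, operator_load=0.1))       # → teleoperation
print(mi_loa_switch("autonomy", robot_perf=0.9, operator_load=0.1))       # → autonomy
```

An HI controller would keep only the human-initiated branch; the peer-to-peer authority of MI is what the first rule adds.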
