  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Network traffic measurement for the next generation Internet

Pezaros, D. January 2005
Measurement-based performance evaluation of network traffic is a fundamental prerequisite for the provisioning of managed and controlled services in short timescales, as well as for enabling the accountability of network resources. The steady introduction and deployment of the Internet Protocol Next Generation (IPNG-IPv6) promises a network address space that can accommodate any device capable of generating a digital heart-beat. Under such a ubiquitous communication environment, Internet traffic measurement becomes of particular importance, especially for the assured provisioning of differentiated levels of service quality to the different application flows. The non-identical response of flows to the different types of network-imposed performance degradation and the foreseeable expansion of networked devices raise the need for ubiquitous measurement mechanisms that can be equally applicable to different applications and transports. This thesis introduces a new measurement technique that exploits native features of IPv6 to become an integral part of the Internet's operation, and to provide intrinsic support for performance measurements at the universally-present network layer. IPv6 Extension Headers have been used to carry both the triggers that invoke the measurement activity and the instantaneous measurement indicators in-line with the payload data itself, providing a high level of confidence that the behaviour of the real user traffic flows is observed. The in-line measurements mechanism has been critically compared and contrasted to existing measurement techniques, and its design and a software-based prototype implementation have been documented. The developed system has been used to provisionally evaluate numerous performance properties of a diverse set of application flows, over different-capacity IPv6 experimental configurations. 
Through experimentation and theoretical argumentation, it has been shown that IPv6-based, in-line measurements can form the basis for accurate and low-overhead performance assessment of network traffic flows in short timescales, by being dynamically deployed where and when required in a multi-service Internet environment.
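The carrier for these in-line measurements is the IPv6 extension header mechanism. A minimal sketch of the idea is packing a Destination Options extension header that carries a departure timestamp as a TLV option alongside the payload. The option type value below is invented for illustration (it is not an IANA-assigned code), and the thesis's actual encoding will differ; only the RFC 8200 framing (next header, length in 8-octet units, Pad1/PadN alignment) is standard.

```python
import struct, time

# Hypothetical option type for a timestamp measurement TLV (illustrative only,
# not an IANA-assigned value).
MEASUREMENT_OPT_TYPE = 0x1E

def build_dest_opts(next_header: int) -> bytes:
    """Build an IPv6 Destination Options extension header carrying an
    8-byte departure timestamp as a TLV option (RFC 8200 framing)."""
    ts = struct.pack("!Q", time.time_ns())                        # 8-byte timestamp
    body = struct.pack("!BB", MEASUREMENT_OPT_TYPE, len(ts)) + ts # TLV: type, len, data
    # Header must be (1 + hdr_ext_len) * 8 octets long; pad with Pad1/PadN.
    pad = (-(2 + len(body))) % 8
    if pad == 1:
        body += b"\x00"                                           # Pad1
    elif pad > 1:
        body += struct.pack("!BB", 1, pad - 2) + b"\x00" * (pad - 2)  # PadN
    hdr_ext_len = (2 + len(body)) // 8 - 1
    return struct.pack("!BB", next_header, hdr_ext_len) + body

ext = build_dest_opts(next_header=59)   # 59 = IPv6 "No Next Header"
assert len(ext) % 8 == 0                # well-formed extension header length
```

Because the option travels inside the same datagram as the user data, whatever path and treatment the payload receives, the measurement indicator receives too, which is the source of the technique's fidelity.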
342

A formal approach to modelling and verification of context-aware systems

Ul-Haque, Hafiz Mahfooz January 2017
The evolution of smart devices and software technologies has expanded the domain of computing from workplaces to other areas of our everyday life. This trend has been rapidly advancing towards ubiquitous computing environments, where smart devices act intelligently on behalf of their users. One of the subfields of ubiquitous computing is context-aware systems. In context-aware systems research, ontology and agent-based technology have emerged as a new paradigm for conceptualizing, designing, and implementing sophisticated software systems. These systems exhibit complex adaptive behaviours, run in highly decentralized environments, and can naturally be implemented as agent-based systems. Context-aware systems usually run on tiny, resource-bounded devices such as smart phones and sensor nodes, and hence face various challenges. The lack of formal frameworks in existing research presents a clear challenge to modelling and verifying such systems. This thesis addresses some of these issues by developing formal logical frameworks for modelling and verifying rule-based context-aware multi-agent systems. Two logical frameworks, LOCRS and LDROCS, have been developed by extending CTL* with belief and communication modalities, which allow us to describe a set of rule-based context-aware reasoning agents with bounds on time, memory, and communication. The key idea underlying the logical approach is to define a formal logic that axiomatizes the set of transition systems, which is then used to state various qualitative and quantitative properties of the systems. The set of rules used to model a desired system is derived from OWL 2 RL ontologies. While LOCRS is based on monotonic reasoning, where an agent's beliefs cannot be revised on contradictory evidence, the LDROCS logic handles inconsistent context information using non-monotonic reasoning. 
The modelling and verification of a healthcare case study are illustrated using the Protégé IDE and the Maude LTL model checker.
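The transition-system view underlying such logics can be illustrated with a toy sketch: states are an agent's belief sets, rules fire to add beliefs, and a safety property (here, that a memory bound is never exceeded) is checked over all reachable states, which for a finite system amounts to a CTL "AG p" check. The rules and names below are invented for illustration; the thesis's logics and the Maude model checker are far richer.

```python
from collections import deque

# Toy transition system for a rule-bounded reasoning agent. A state is a
# frozenset of beliefs; each rule, once its premises hold, adds a conclusion.
MEMORY_BOUND = 3
RULES = {("cold",): "heater_on", ("heater_on",): "temp_ok"}  # invented rules

def successors(state):
    for premises, conclusion in RULES.items():
        if set(premises) <= state and conclusion not in state:
            yield frozenset(state | {conclusion})

def check_invariant(initial, prop):
    """BFS over reachable states; True iff prop holds in every reachable
    state (an explicit-state 'AG prop' check on a finite system)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not prop(s):
            return False
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True

init = frozenset({"cold"})
# Safety property: the agent never holds more beliefs than its memory bound.
print(check_invariant(init, lambda s: len(s) <= MEMORY_BOUND))  # True
```

Axiomatizing the transition system in a logic, rather than enumerating it as above, is what lets the frameworks state quantitative resource bounds as formulas and discharge them with a model checker.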
343

Identifying and mitigating security risks in multi-level systems-of-systems environments

Lever, K. E. January 2018
In recent years, organisations, governments, and cities have taken advantage of the many benefits and automated processes that Information and Communication Technology (ICT) offers, evolving their existing systems and infrastructures into highly connected and complex Systems-of-Systems (SoS). These infrastructures endeavour to increase robustness and offer some resilience against single points of failure. The Internet, wireless sensor networks, the Internet of Things, critical infrastructures, the human body, etc., can all be broadly categorised as SoS, as they encompass a wide range of differing systems that collaborate to fulfil objectives the distinct systems could not fulfil on their own. ICT-constructed SoS face the same dangers, limitations, and challenges as traditional cyber-based networks, and while monitoring the security of small networks can be difficult, the dynamic nature, size, and complexity of SoS make securing these infrastructures more taxing. Solutions that attempt to identify risks and vulnerabilities and to model the topologies of SoS have failed to evolve at the same pace as SoS adoption. As a result, attacks against these infrastructures have gained prevalence, as unidentified vulnerabilities and exploits provide unguarded opportunities for attackers. In addition, the new collaborative relations introduce new cyber interdependencies and unforeseen cascading failures, and increase complexity. This thesis presents an innovative approach to identifying and mitigating risks and securing SoS environments. Our security framework incorporates a number of novel techniques that allow us to calculate the security level of the entire SoS infrastructure using vulnerability analysis, node property aspects, topology data, and other factors, and to mitigate risks without adding additional resources to the SoS infrastructure. 
Other risk factors we examine include risks associated with different properties and the likelihood of violating access control requirements. Extending the principles of the framework, we also apply the approach to multi-level SoS in order to improve both SoS security and the overall robustness of the network. In addition, the identified risks, vulnerabilities, and interdependent links are modelled by extending network modelling and attack graph generation methods. The proposed SeCurity Risk Analysis and Mitigation Framework and its principal techniques have been researched, developed, implemented, and evaluated via numerous experiments and case studies. The results confirm that the framework can successfully observe an SoS and produce an accurate security level for the entire SoS in all instances, visualising identified vulnerabilities, interdependencies, high-risk nodes, data access violations, and security grades in a series of reports and undirected graphs. The framework's evolutionary approach to mitigating risks, and the robustness function that determines the appropriateness of the SoS, revealed promising results, with the framework and principal techniques identifying SoS topologies and quantifying their associated security levels. They distinguish SoS that are optimally structured (in terms of communication security) from those that cannot be evolved because the applied processes would negatively impede the security and robustness of the SoS. Likewise, through its evolution methods the framework can identify SoS communication configurations that improve communication security and assure data as it traverses an unsecured and unencrypted SoS, reporting the enhanced, risk-mitigating configurations in a series of undirected graphs and reports that visualise and detail the SoS topology and its vulnerabilities. 
These reported candidate and optimal solutions improve security and SoS robustness, and will support the maintenance of acceptably low centrality factors, should the recommended configurations be applied to the evaluated SoS infrastructure.
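The core idea of grading a whole infrastructure from per-node vulnerability data plus topology can be sketched very simply. The scoring rule below is our own simplification, not the thesis's algorithm: each node's CVSS-like score (values invented) is weighted by its degree centrality, so a well-connected weak node, which exposes more interdependent systems to cascading failure, is penalised more, and the node risks are then aggregated into a 0-10 security level.

```python
# Hypothetical 4-node SoS: edges and vulnerability scores are invented.
edges = [("web", "db"), ("web", "mail"), ("web", "files"), ("db", "files")]
vuln = {"web": 7.5, "db": 5.0, "mail": 4.0, "files": 6.0}  # CVSS-like scores

def degree(node):
    return sum(node in e for e in edges)

def node_risk(node):
    # Weight vulnerability by relative connectivity: highly connected weak
    # nodes expose more interdependent systems to cascading failure.
    max_deg = max(degree(n) for n in vuln)
    return vuln[node] * (1 + degree(node) / max_deg)

def sos_security_level():
    """0-10 scale, 10 = most secure: average node risk inverted against
    the worst node's risk (a toy aggregation rule)."""
    risks = [node_risk(n) for n in vuln]
    return round(10 * (1 - sum(risks) / (len(risks) * max(risks))), 2)

high_risk = [n for n in vuln if node_risk(n) == max(node_risk(m) for m in vuln)]
print(high_risk, sos_security_level())
```

In this toy run the hub node "web" is flagged as highest risk despite not having the highest raw vulnerability score, which is exactly the kind of topology-aware insight a flat vulnerability scan misses.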
344

Intrusion prediction system for cloud computing and network based systems

Abdlhamed, M. January 2018
Cloud computing offers cost-effective computational and storage services with on-demand capacities that scale according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens the horizons for many businesses, it is a new paradigm that exploits already existing computing technologies in a new framework rather than being a novel technology in itself. This means that cloud computing has inherited classical computing problems that are still challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are an important security component and defence layer that detects cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations: for example, attacks are often detected only after their damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis proposes an Intrusion Prediction System capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations. It has three main modules. The monitoring module observes the network for any intrusion preparations. 
This thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, part of the reconnaissance that represents an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for attack or intrusion indications, by exploring and applying different statistical and probabilistic methods for scan detection. The second module of the prediction system is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to cyber-attack. Finally, the prediction module combines the outputs of the other two modules and performs a risk assessment of the system's security based on the predicted intrusions. The results of the conducted experiments show that the suggested system outperforms analogous methods in network scan detection performance, which accordingly means a significant improvement to the security of the targeted system. The scanning detection algorithm achieved high detection accuracy with 0% false negatives and 50% false positives. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
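A minimal sketch of scan detection, much cruder than the thesis's dynamic-selective statistical algorithm, is to flag a source that touches an unusually large number of distinct destination ports within one observation window, since port sweeping is a classic precursor of attack reconnaissance. The addresses, threshold, and flow format below are all invented for illustration.

```python
from collections import defaultdict

PORT_THRESHOLD = 10   # assumed cut-off; a real system would tune this per window

def detect_scanners(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples observed in one
    time window. Returns the set of sources that look like port scanners."""
    targets = defaultdict(set)
    for src, dst, dport in flows:
        targets[src].add((dst, dport))
    return {src for src, tgts in targets.items()
            if len({port for _, port in tgts}) > PORT_THRESHOLD}

# Synthetic window: one host sweeping ports 1-29, one normal HTTPS client.
window = [("10.0.0.5", "10.0.0.9", p) for p in range(1, 30)]
window += [("10.0.0.7", "10.0.0.9", 443)] * 50
print(detect_scanners(window))  # {'10.0.0.5'}
```

The thesis's contribution over a fixed threshold like this is making the analysis selective (only statistically interesting traffic is examined, hence the 23% data consumption figure) and dynamic (thresholds adapt to the observed traffic).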
345

Efficient runtime security system for decentralised distributed systems

Thulnoon, A. A. T. January 2018
Distributed systems can be defined as systems that are scattered over geographical distances and provide different activities through communication, processing, data transfer, and so on, thus increasing cooperation, efficiency, and reliability in dealing with users and data resources jointly. For this reason, distributed systems have been shown to be a promising infrastructure for most applications in the digital world. Despite their advantages, keeping these systems secure is a complex task because of the unconventional nature of distributed systems, which can produce many security problems such as phishing, denial of service, or eavesdropping. Adopting security and privacy policies in distributed systems therefore increases the trustworthiness between users and these systems. However, adding or updating security is considered one of the most challenging concerns, and this relates to the various security vulnerabilities which exist in distributed systems. The most significant is inserting, modifying, or even removing a security concern according to the security status that may arise at runtime. Moreover, these problems are exacerbated when the system adopts the multi-hop concept as a way to transmit and process information. This can pose many significant security challenges, especially when dealing with decentralised distributed systems where security must be furnished end-to-end. Unfortunately, existing solutions are insufficient to deal with these problems: CORBA, for example, considers only one-to-one relationships, while DSAW deals with end-to-end security but without taking into account the possibility of information sensitivity changing during runtime. This thesis proposes a mechanism for enforcing security policies and dealing with distributed systems' security weaknesses from a software perspective. 
The proposed solution utilises Aspect-Oriented Programming (AOP) to address security concerns at both compile time and runtime. It is based on a decentralised distributed system that adopts the multi-hop concept to deal with different requested tasks, and focuses on how to achieve high accuracy, data integrity, and high efficiency of the distributed system in real time. This is done by modularising the most efficient security solutions, access control and cryptography, using an Aspect-Oriented Programming language. The experimental results show that the proposed solution overcomes the shortcomings of the existing solutions by fully integrating with the decentralised distributed system to achieve dynamic, highly cooperative, high-performance, end-to-end holistic security.
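The AOP idea of weaving a security concern around business logic without touching it can be sketched in Python with a decorator standing in for an aspect; the role names and record function here are invented, and a real AOP language (e.g. AspectJ-style pointcuts) weaves at compile time rather than by explicit decoration.

```python
import functools

def requires_role(role):
    """Aspect-like wrapper: weaves an access-control check around any
    function whose first argument is a user record (a dict here)."""
    def aspect(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", []):
                raise PermissionError(f"{user['name']} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return aspect

@requires_role("admin")          # the security concern stays out of the body
def delete_record(user, record_id):
    return f"record {record_id} deleted"

admin = {"name": "alice", "roles": ["admin"]}
guest = {"name": "bob", "roles": []}
print(delete_record(admin, 7))   # record 7 deleted
try:
    delete_record(guest, 7)
except PermissionError as e:
    print("denied:", e)
```

Because the check lives in one aspect rather than in every function body, inserting, modifying, or removing the security concern at runtime, the scenario the thesis highlights, becomes a change to a single module.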
346

Developing an advanced IPv6 evasion attack detection framework

Tajdini, M. January 2018
Internet Protocol Version 6 (IPv6) is the most recent generation of the Internet Protocol. The transition from the current Internet Protocol Version 4 (IPv4) to IPv6 has raised new issues, the most crucial of which is security vulnerabilities. Most vulnerabilities are common to IPv4 and IPv6, e.g. evasion attacks, Distributed Denial of Service (DDoS), and fragmentation attacks. According to the IPv6 RFC (Request for Comments) recommendations, there are potential attacks against various operating systems. Discrepancies between the behaviour of several operating systems can lead to Intrusion Detection System (IDS) evasion, firewall evasion, operating system fingerprinting, network mapping, DoS/DDoS attacks, and remote code execution attacks. We investigated some of the security issues of IPv6 by reviewing existing solutions and methods, and tested two open-source Network Intrusion Detection Systems (NIDSs), Snort and Suricata, against some IPv6 evasion and attack methods. The results show that both NIDSs are unable to detect most of the methods used to evade detection. This thesis presents a detection framework specifically developed for IPv6 networks to detect evasion, insertion, and DoS attacks that use IPv6 Extension Headers and Fragmentation. We implemented the proposed theoretical solution into a framework for evaluation tests. To develop the framework, the "dpkt" module is employed to capture and decode packets. During the development phase, a bug was found in the module used to parse/decode packets, and a patch was provided so that the module decodes IPv6 packets correctly. The standard unpack function in the "ip6" section of the "dpkt" package follows extension headers, which means that after parsing one has no access to all the extension headers in their original order. 
Defining a new field called all_extension_headers, and appending each header to it as parsing moves along, gives us access to all the extension headers while keeping the original parsing speed of the framework virtually untouched. The extra memory footprint is also negligible, as it is a linear fraction of the size of the whole set of packets. By decoding each packet, extracting its data, and evaluating the data against user-defined values, the proposed framework is able to detect IPv6 evasion, insertion, and DoS attacks. The proposed framework consists of four layers. The first layer captures the network traffic and passes it to the second layer for packet decoding, which is the most important part of the detection process: if the NIDS cannot decode and extract the packet content, it cannot pass correct information on for detection. Once decoded, the packet is sent to the third layer, the Detection Engine, the brain of the proposed solution, which decides whether the packet poses a threat by evaluating its information against the defined values. Once the packet(s) have been examined by the detection processes, the result is sent to the output layer. If a packet matches a type or signature that the system administrator chose, the framework raises an alarm and automatically logs all details of the packet, saving them for further investigation by the system administrator. We evaluated the proposed framework and its processes via numerous experiments. The results show that the proposed framework, called the NOPO framework, offers better detection in terms of accuracy, a more accurate packet-decoding process, and reduced resource usage compared to both existing NIDSs.
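The behaviour the all_extension_headers patch restores, walking the extension-header chain while recording every header in its original wire order, can be sketched with raw struct parsing (used here instead of dpkt to keep the sketch self-contained; the helper names are ours). Each extension header starts with a Next Header byte and a length byte counted in 8-octet units beyond the first 8, per RFC 8200.

```python
import struct

EXT_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 60: "Destination Options"}

def walk_ext_headers(next_hdr, payload):
    """Follow an IPv6 extension-header chain, returning the final next-header
    value and [(name, raw_bytes), ...] in original wire order."""
    chain = []
    while next_hdr in EXT_HEADERS:
        nh, ext_len = struct.unpack_from("!BB", payload)
        size = (ext_len + 1) * 8        # RFC 8200: length excludes first 8 octets
        chain.append((EXT_HEADERS[next_hdr], payload[:size]))
        next_hdr, payload = nh, payload[size:]
    return next_hdr, chain

# One Destination Options header (8 bytes, PadN-filled) then No Next Header (59).
dest_opts = bytes([59, 0, 1, 4, 0, 0, 0, 0])
final, chain = walk_ext_headers(60, dest_opts)
print(final, [name for name, _ in chain])  # 59 ['Destination Options']
```

Keeping the ordered chain matters for evasion detection precisely because attackers abuse unusual header orderings and repetitions that a parser which discards the chain can no longer see.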
347

A machine learning approach for smart computer security audit

Pozdniakov, K. January 2017
This thesis presents a novel application of machine learning technology to automate network security audits, and penetration testing processes in particular. A model-free reinforcement learning approach is presented, characterised by the absence of an environmental model: the model is derived autonomously by the audit system while acting in the tested computer network. The penetration testing process is specified as a Markov decision process (MDP) without definition of reward and transition functions for every state/action pair. The presented approach includes the application of traditional and modified Q-learning algorithms. A traditional Q-learning algorithm learns the action-value function, stored in a table, which gives the expected utility of executing a particular action in a particular state of the penetration testing process. The modified Q-learning algorithm differs by incorporating a state-space approximator and representing the action-value function as a linear combination of features. Two deep architectures of the approximator are presented: an autoencoder joined with an artificial neural network (ANN), and an autoencoder joined with a recurrent neural network (RNN). The autoencoder is used to derive the feature set describing audited hosts. The ANN approximates the state space of the audit process based on the derived features. The RNN is a more advanced version of the approximator, differing by additional loop connections from the hidden to the input layers of the neural network. Such an architecture incorporates previously executed actions into new inputs, giving the audit system the opportunity to learn sequences of actions leading to the goal of the audit, defined as receiving administrator rights on the host. The model-free reinforcement learning approach based on traditional Q-learning algorithms was also applied to reveal new vulnerabilities, buffer overflows in particular. 
The penetration testing system showed the ability to discover a string exploiting a potential vulnerability by learning its formation process on the go. In order to prove the concept and test the efficiency of the approach, an audit tool was developed. The presented results demonstrate the adaptivity of the approach and the performance of the algorithms and deep machine learning architectures. Different sets of hyperparameters are compared graphically to test the ability to converge to the optimal action policy, i.e. the sequence of actions leading to the audit goal (getting admin rights on the remote host). The testing environment is also presented. It consists of 80+ virtual machines based on a vSphere virtualization platform. This combination of hosts represents a typical corporate network with a Users segment, a Demilitarized Zone (DMZ), and an external segment (the Internet). The network has typical corporate services available: a web server, mail server, file server, SSH, and SQL server. During the testing process, the audit system acts as an attacker from the Internet.
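Tabular Q-learning, the traditional variant the thesis starts from, can be sketched on a toy "attack chain" MDP. The states, actions, and rewards below are invented for illustration (scan, gain a foothold, escalate to admin rights); the learned table ends up preferring exploitation over redundant re-scanning, i.e. it converges to an action policy reaching the audit goal.

```python
import random
from collections import defaultdict

# Toy MDP: states, actions, and deterministic transitions are invented.
ACTIONS = {"recon": ["scan"], "foothold": ["exploit", "scan"],
           "user": ["escalate"], "admin": []}
TRANSITIONS = {("recon", "scan"): "foothold",
               ("foothold", "exploit"): "user",
               ("foothold", "scan"): "foothold",   # wasted re-scan
               ("user", "escalate"): "admin"}      # goal: admin rights

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    Q = defaultdict(float)                          # tabular action-value function
    for _ in range(episodes):
        s = "recon"
        while s != "admin":
            acts = ACTIONS[s]
            a = (random.choice(acts) if random.random() < eps      # explore
                 else max(acts, key=lambda x: Q[(s, x)]))          # exploit
            s2 = TRANSITIONS[(s, a)]
            r = 100.0 if s2 == "admin" else -1.0                   # goal reward
            best_next = max((Q[(s2, b)] for b in ACTIONS[s2]), default=0.0)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learn()
# The greedy policy prefers exploiting the foothold over re-scanning.
assert Q[("foothold", "exploit")] > Q[("foothold", "scan")]
```

The thesis's modified algorithm replaces the table with an autoencoder-derived feature approximator, which is what makes the approach scale beyond toy state spaces like this one.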
348

Data-independent vs. data-dependent dimension reduction for pattern recognition in high dimensional spaces

Hassan, Tahir Mohammed January 2017
There has been a rapid emergence of new pattern recognition/classification techniques in a variety of real-world applications over the last few decades. In most such applications, the pattern of interest is modelled by a data vector/array of very high dimension. The main challenges relate to the efficiency of retrieving, analysing, and verifying/classifying the pattern/object of interest. The "curse of dimension" is a reference to these challenges and is commonly addressed by Dimension Reduction (DR) techniques. Several DR techniques have been developed and implemented in a variety of applications. The most common DR schemes depend on a dataset of "typical samples" (e.g. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA)). However, data-independent DR schemes (e.g. the Discrete Wavelet Transform (DWT) and Random Projections (RP)) are becoming more desirable due to the low density ratio of samples to dimension. In this thesis, we critically review both types of techniques and highlight their advantages and disadvantages in terms of efficiency and impact on recognition accuracy. We study the theoretical justification for the existence of DR transforms that preserve, within tolerable error, distances between would-be feature vectors modelling objects of interest. We observe that data-dependent DRs do not specifically attempt to preserve distances, and that the problems of overfitting and bias are consequences of a low density ratio of samples to dimension. Accordingly, the focus of our investigations is on data-independent DR schemes, and in particular on the different ways of generating RPs as an efficient DR tool. RPs suitable for pattern recognition applications are restricted only by a lower bound on the reduced dimension that depends on the tolerable error. 
Besides the known RPs generated in accordance with some probability distributions, we investigate and test the performance of differently constructed over-complete Hadamard m×n (m ≪ n) submatrices, using the inductive Sylvester and Walsh-Paley methods. Our experimental work, conducted for two case studies (Speech Emotion Recognition (SER) and Gait-Based Gender Classification (GBGC)), demonstrates that these matrices perform as well as, if not better than, data-dependent DR schemes. Moreover, dictionaries obtained by sampling the top rows of Walsh-Paley matrices outperform matrices constructed more randomly, though this may be influenced by the type of biometric and/or recognition scheme. We also propose feature-block (FB) based DR as an innovative way to overcome the problem of low-density-ratio applications, and demonstrate its success for the SER case study.
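The distance-preservation property that justifies random projections (the Johnson-Lindenstrauss phenomenon) is easy to demonstrate: a Gaussian matrix scaled by 1/sqrt(k) maps n-dimensional vectors to k dimensions while keeping pairwise Euclidean distances close to their originals. The dimensions and tolerance below are arbitrary choices for the demonstration, not values from the thesis.

```python
import math, random

random.seed(42)
n, k = 1000, 200                     # original and reduced dimensions (arbitrary)

# Gaussian random projection matrix, scaled so distances are preserved
# in expectation (each entry N(0, 1/k)).
R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(n)] for _ in range(k)]

def project(v):
    return [sum(r_i * v_i for r_i, v_i in zip(row, v)) for row in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

ratio = dist(project(x), project(y)) / dist(x, y)
print(round(ratio, 2))   # close to 1.0: the pairwise distance survives the 5x reduction
```

Data-dependent schemes such as PCA offer no such guarantee when the sample-to-dimension ratio is low, which is the document's motivation for preferring data-independent constructions, including the structured Hadamard submatrices it studies as deterministic alternatives to the Gaussian matrix used here.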
349

Parallel execution of logic programs.

January 1988
Ho-Fung Leung. Thesis (M.Ph.)--Chinese University of Hong Kong, 1988. Bibliography: leaves [2-6], 3rd group.
350

Alternately-twisted cube as an interconnection network.

January 1991
by Wong Yiu Chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Bibliography: leaves [100]-[101].

Contents:
  Acknowledgement
  Abstract
  1. Introduction (p.1-1)
  2. Alternately-Twisted Cube: Definition & Graph-Theoretic Properties (p.2-1)
     2.1. Construction (p.2-1)
     2.2. Topological Properties (p.2-12)
          2.2.1. Node Degree, Link Count & Diameter (p.2-12)
          2.2.2. Node Symmetry (p.2-13)
          2.2.3. Subcube Partitioning (p.2-18)
          2.2.4. Distinct Paths (p.2-23)
          2.2.5. Embedding other networks (p.2-24)
                 2.2.5.1. Rings (p.2-25)
                 2.2.5.2. Grids (p.2-29)
                 2.2.5.3. Binary Trees (p.2-35)
                 2.2.5.4. Hypercubes (p.2-42)
          2.2.6. Summary of Comparison with the Hypercube (p.2-44)
  3. Network Properties (p.3-1)
     3.1. Routing Algorithms (p.3-1)
     3.2. Message Transmission: Static Analysis (p.3-5)
     3.3. Message Transmission: Dynamic Analysis (p.3-13)
     3.4. Broadcasting (p.3-17)
  4. Parallel Processing on the Alternately-Twisted Cube (p.4-1)
     4.1. Ascend/Descend class algorithms (p.4-1)
     4.2. Combining class algorithms (p.4-7)
     4.3. Numerical algorithms (p.4-8)
  5. Summary, Comparison & Conclusion (p.5-1)
     5.1. Summary (p.5-1)
     5.2. Comparison with other hypercube-like networks (p.5-2)
     5.3. Conclusion (p.5-7)
     5.4. Possible future research (p.5-7)
  Bibliography
