331

Understanding Flaws in the Deployment and Implementation of Web Encryption

Sivakorn, Suphannee January 2018 (has links)
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that there exists enough infrastructure in place and the protocols have been “tested” (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations. In this dissertation, in an effort to improve users’ privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user’s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user’s saved address and visited websites from Google, while Bing and Yahoo allow attackers to extract the contact list and send emails from the user’s account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks. Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue to not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage. Finally, as the security guarantees of TLS (and, in turn, HTTPS) are critically dependent on the correct validation of X.509 server certificates, we study hostname verification, a critical component in the certificate validation process. We develop HVLearn, a novel testing framework to verify the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. Using HVLearn, we found 8 unique violations of the RFC specifications. Several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
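The cookie-hijacking flaws described above stem largely from session cookies that are still sent over plain HTTP on partially encrypted sites. As an illustrative sketch only (not code from the dissertation), the snippet below shows the two server-side mitigations the abstract alludes to, using Flask as an assumed framework: marking cookies Secure/HttpOnly and sending an HSTS header so compliant browsers refuse to fall back to HTTP.

```python
# Illustrative sketch (not from the dissertation): marking session cookies as
# Secure/HttpOnly and enabling HSTS so browsers will not send them over HTTP.
# Flask and the /login endpoint below are assumptions chosen for brevity.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("logged in")
    # Secure: never sent over plain HTTP; HttpOnly: hidden from page scripts.
    resp.set_cookie("session", "opaque-token", secure=True,
                    httponly=True, samesite="Lax")
    # HSTS: instruct the browser to upgrade all future requests to HTTPS.
    resp.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return resp

if __name__ == "__main__":
    # A real deployment would sit behind a TLS-terminating server/proxy;
    # this development server is only for local experimentation.
    app.run()
```

Partial HTTPS deployments that omit these attributes are exactly what leaves "non-secure" cookies exposed to a network attacker of the kind measured in the study.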
332

Machine Learning Based User Modeling for Enterprise Security and Privacy Risk Mitigation

Dutta, Preetam Kumar January 2019 (has links)
Modern organizations are faced with a host of security concerns despite advances in security research. The challenges are diverse, ranging from malicious parties to vulnerable hardware. One particularly strong pain point for enterprises is the insider threat detection problem, in which an internal employee, current or former, behaves against the interest of the company. Approaches designed to discourage and to prevent insiders are multifaceted, but efforts to detect malicious users typically involve a combination of an active monitoring infrastructure and a User Behavior Analytics (UBA) system, which applies Machine Learning (ML) algorithms to learn user behavior and identify abnormal behaviors indicative of a security violation. The principal problem with the aforementioned approach is the uncertainty regarding how to measure the functionality of an insider threat detection system. The difficulty of research in UBA technology hinges on sparse knowledge about the models utilized and insufficient data to effectively study the problem. Realistic ground truth data is next to impossible to acquire for open research. This dissertation tackles those challenges and asserts that predictive UBA models can be applied to simulate a wide range of user behaviors in situ and can be broadened to examine test regimes of deployed UBA technology (including evasive low and slow malicious behaviors) without disclosing private and sensitive information. Furthermore, the underlying technology presented in this thesis can increase data availability through a combination of generative adversarial networks, which create realistic yet fake data, and the system log files created by the technology itself. Given the commercial viability of UBA technology, academic researchers are often challenged by the inability to test on widely deployed, proprietary software and thus must rely on standard ML based approaches such as Gaussian Mixture Models (GMMs), Support Vector Machines (SVMs) and Bayesian Networks (BNs) to emulate UBA systems. We begin the dissertation with the introduction and implementation of CovTrain, the first neuron coverage guided training algorithm that improves robustness of Deep Learning (DL) systems. CovTrain is tested on a variety of massive, well-tested datasets and has outperformed standard DL models in terms of both loss and accuracy. We then use it to create an enhanced DL based UBA system used in our formal experimental studies. However, the challenges of measuring and testing a UBA system remain open problems in both the academic and commercial communities. With those thoughts in mind, we next present the design, implementation and evaluation of the Bad User Behavior Analytics (BUBA) system, the first framework of its kind to test UBA systems through the iterative introduction of adversarial examples using simulated user bots. The framework's flexibility enables it to tackle an array of problems, including enterprise security at both the system and cloud storage levels. We test BUBA both in a synthetic environment, with UBA systems that employ state-of-the-art ML models including an enhanced DL model trained using CovTrain, and on the live Columbia University network. The results show the ability to generate synthetic users that can successfully fool UBA systems at the boundaries.
In particular, we find that adjusting the time horizon of a given attack can help it escape UBA detection; in live tests on the Columbia network, SSH attacks could be carried out without detection if the time parameter was carefully adjusted. We may consider this an example of Adversarial ML, where temporal test data is modified to evade detection. We then consider a novel extension of BUBA to test cloud storage security, in light of the observation that large enterprises are not actively monitoring their cloud storage; recent surveys report that security personnel fear companies are moving to the cloud faster than they can secure it. We believe that there are opportunities to improve cloud storage security, especially given the increasing trend towards cloud utilization. BUBA is intended to reveal potential security violations and highlight what security mechanisms are needed to prevent significant data loss. In spite of these advances, the development of BUBA underscores yet another difficulty for a researcher in big data analytics for security: a scarcity of data. Insider threat system development requires granular details about the behaviors of the individuals in its local ecosystem in order to discern anomalous patterns or behaviors. Deep Neural Networks (DNNs) have allowed researchers to discover patterns that were never before seen, but mandate large datasets. Thus, systematic data generation through techniques such as Generative Adversarial Networks (GANs), as employed in part for BUBA, has become ubiquitous in the face of increased data needs for scientific research. Through the first legal analysis of its kind, we examine the legality of sharing synthetic data given privacy requirements. An analysis of statutes through different lenses helps us determine that synthetic data may be the next, best step for research advancement. We conclude that realistic yet artificially generated data offers a tangible path forward for academic and broader research endeavors, but policy must meet technological advance before general adoption can take place.
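For readers unfamiliar with the UBA baselines mentioned above, the following minimal sketch shows the kind of GMM-based behaviour model that academic work falls back on when proprietary UBA software is unavailable. The two features, the synthetic data and the alert threshold are illustrative assumptions, not the CovTrain or BUBA implementation.

```python
# Minimal sketch of a GMM-based user-behaviour baseline of the kind the thesis
# notes academic work relies on; features and thresholds are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "normal" behaviour: [logins per day, MB transferred per day].
normal = np.column_stack([rng.poisson(8, 500), rng.normal(200, 40, 500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)

# Flag anything whose log-likelihood falls below the 1st percentile of the
# training scores -- a stand-in for a UBA alert threshold.
threshold = np.percentile(gmm.score_samples(normal), 1)

test = np.array([[9, 210],      # ordinary day
                 [3, 2500]])    # low logins, exfiltration-like transfer volume
for features, score in zip(test, gmm.score_samples(test)):
    label = "ALERT" if score < threshold else "ok"
    print(features, f"log-likelihood={score:.1f}", label)
```

A BUBA-style "low and slow" bot would aim to keep its score above such a threshold, which is precisely the evasion behaviour the thesis probes.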
333

An architecture for the forensic analysis of Windows system generated artefacts

Hashim, Noor Hayati January 2011 (has links)
Computer forensic tools have been developed to enable forensic investigators to analyse software artefacts to help reconstruct possible scenarios for activity on a particular computer system. A number of these tools allow the examination and analysis of system generated artefacts such as the Windows registry. Examination and analysis of these artefacts is focussed on recovering the data and extracting information relevant to a digital investigation. This information is currently underused in most digital investigations. With this in mind, this thesis considers system generated artefacts that contain information concerning the activities that occur on a Windows system and will often contain evidence relevant to a digital investigation. The objective of this research is to develop an architecture that simplifies and automates the collection of forensic evidence from system generated files where the data structures may be either known or in a structured but poorly understood (unknown) format. The hypothesis is that it should be feasible to develop an architecture that is able to integrate forensic data extracted from a range of system generated files, and to implement a proof of concept prototype tool capable of visualising Event logs and Swap files. This thesis presents an architecture to enable the forensic investigator to analyse and visualise a range of system generated artefacts for which the internal arrangement of data is either well structured and understood, or unclear and less publicised (known and unknown data structures). The architecture reveals methods to access, view and analyse system generated artefacts, and is intended to facilitate the extraction and analysis of operating system generated artefacts while being extensible, flexible and reusable. The architectural concepts are tested using a prototype implementation focussed on the Windows Event logs and Swap files. Event logs reveal evidence regarding logons, authentication, account and privilege use, and can address questions relating to which user accounts were being used and which machines were accessed. The Swap file contains fragments of data, remnants or entire documents, e-mail messages or results of internet browsing which reveal past user activities. Issues relating to understanding and visualising artefact data structures are discussed and possible solutions are explored. The architecture is developed by examining the requirements and methods with respect to the needs of computer forensic investigations and forensic process models, with the intention of developing a new multiplatform tool to visualise the content of Event logs and Swap files. This tool is aimed at displaying the data contained in Event logs and Swap files in a graphical manner, which should enable the detection of information that may support the investigation. Visualisation techniques can also aid forensic investigators in identifying suspicious events and files, making such techniques more feasible for consideration in a wider range of cases and, in turn, improving standard procedures. The tool is developed to fill a gap left by certain other open source tools, which visualise Event log and Swap file data in a text based format only.
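As a rough illustration of the kind of extraction such an architecture automates, the sketch below pulls recent Security-log records with Windows' built-in wevtutil utility and tallies logon events (IDs 4624/4625, the standard successful/failed logon events). It is an assumption-level example only, runs on Windows with administrative rights, and is not the prototype described in the thesis.

```python
# Sketch: query the Windows Security event log via the built-in wevtutil tool
# and count successful (4624) and failed (4625) logon events.
import subprocess
import xml.etree.ElementTree as ET
from collections import Counter

raw = subprocess.run(
    ["wevtutil", "qe", "Security", "/c:200", "/rd:true", "/f:xml"],
    capture_output=True, text=True, check=True
).stdout

counts = Counter()
for chunk in raw.split("</Event>"):
    if "<Event" not in chunk:
        continue
    root = ET.fromstring(chunk + "</Event>")      # re-close each event element
    for elem in root.iter():
        if elem.tag.endswith("EventID"):          # namespace-agnostic match
            counts[elem.text] += 1

print("Successful logons (4624):", counts.get("4624", 0))
print("Failed logons (4625):", counts.get("4625", 0))
```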
334

On tracing attackers of distributed denial-of-service attack through distributed approaches. / CUHK electronic theses & dissertations collection

January 2007 (has links)
For the macroscopic traceback problem, we propose an algorithm, which leverages the well-known Chandy-Lamport distributed snapshot algorithm, so that a set of border routers of the ISPs can correctly gather statistics in a coordinated fashion. The victim site can then deduce the local traffic intensities of all the participating routers. Given the collected statistics, we provide a method for the victim site to locate the attackers who sent out dominating flows of packets. Our finding shows that the proposed methodology can pinpoint the location of the attackers in a short period of time. / In the second part of the thesis, we study a well-known technique against the microscopic traceback problem. The probabilistic packet marking (PPM for short) algorithm by Savage et al. has attracted the most attention in contributing to the idea of IP traceback. The most interesting point of this IP traceback approach is that it allows routers to encode certain information on the attack packets based on a pre-determined probability. Upon receiving a sufficient number of marked packets, the victim (or a data collection node) can construct the set of paths the attack packets traversed (or the attack graph), and hence the victim can obtain the locations of the attackers. In this thesis, we present a discrete-time Markov chain model that calculates the precise number of marked packets required to construct the attack graph. / The denial-of-service attack has been a pressing problem in recent years. Denial-of-service defense research has blossomed into one of the main streams in network security. Various techniques such as the pushback message, the ICMP traceback, and the packet filtering techniques are the remarkable results from this active field of research. / The focus of this thesis is to study and devise efficient and practical algorithms to tackle the flood-based distributed denial-of-service attacks (flood-based DDoS attack for short), and we aim to trace every location of the attacker. In this thesis, we propose a revolutionary, divide-and-conquer traceback methodology. Tracing back the attackers on a global scale is always a difficult and tedious task. Alternatively, we suggest that one should first identify Internet service providers (ISPs) that contribute to the flood-based DDoS attack by using a macroscopic traceback approach. After the concerned ISPs have been found, one can narrow the traceback problem down, and then the attackers can be located by using a microscopic traceback approach. / Though the PPM algorithm is a desirable algorithm that tackles the microscopic traceback problem, the PPM algorithm is not perfect as its termination condition is not well-defined in the literature. More importantly, without a proper termination condition, the traceback results could be wrong. In this thesis, we provide a precise termination condition for the PPM algorithm. Based on the precise termination condition, we devise a new algorithm named the rectified probabilistic packet marking algorithm (RPPM algorithm for short). The most significant merit of the RPPM algorithm is that when the algorithm terminates, it guarantees that the constructed attack graph is correct with a specified level of confidence. Our finding shows that the RPPM algorithm can guarantee the correctness of the constructed attack graph under different probabilities that the routers mark the attack packets and different structures of the network graphs.
The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means to enhance the reliability of the PPM algorithm. / Wong Tsz Yeung. / "September 2007." / Adviser: Man Hon Wong. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4867. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 176-185). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
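To make the packet-marking idea concrete, the toy simulation below implements node sampling in the spirit of Savage et al.'s PPM scheme: each router on the path overwrites the packet's mark with probability p, and the victim keeps collecting packets until every router has been observed. It deliberately omits edge sampling and the RPPM termination analysis; the path length and marking probability are illustrative choices.

```python
# Toy node-sampling simulation of probabilistic packet marking: count how many
# packets the victim must receive before every router on the path is seen.
import random

def packets_until_full_path(path_len=10, p=0.04, seed=1):
    rng = random.Random(seed)
    seen, packets = set(), 0
    while len(seen) < path_len:
        packets += 1
        mark = None
        for router in range(path_len):   # router 0 is nearest the attacker
            if rng.random() < p:
                mark = router            # routers closer to the victim overwrite
        if mark is not None:
            seen.add(mark)
    return packets

trials = [packets_until_full_path(seed=s) for s in range(20)]
print("packets needed per trial:", trials)
print("average:", sum(trials) / len(trials))
```

Runs of this kind make it clear why a principled termination condition matters: the number of packets needed varies widely from trial to trial.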
335

Server's anonymity attack and protection of P2P-Vod systems. / Server's anonymity attack and protection of peer-to-peer video on demand systems

January 2010 (has links)
Lu, Mengwei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 52-54). / Abstracts in English and Chinese.
Contents:
  Chapter 1 Introduction (p. 1)
  Chapter 2 Introduction of P2P-VoD Systems (p. 5)
    2.1 Major Components of the System (p. 5)
    2.2 Peer Join and Content Discovery (p. 6)
    2.3 Segment Sizes and Replication Strategy (p. 7)
    2.4 Piece Selection (p. 8)
    2.5 Transmission Strategy (p. 9)
  Chapter 3 Detection Methodology (p. 10)
    3.1 Capturing Technique (p. 11)
    3.2 Analytical Framework (p. 15)
    3.3 Results of our Detection Methodology (p. 24)
  Chapter 4 Protective Architecture (p. 25)
    4.1 Architecture Overview (p. 25)
    4.2 Content Servers (p. 27)
    4.3 Shield Nodes (p. 28)
    4.4 Tracker (p. 29)
    4.5 A Randomized Assignment Algorithm (p. 30)
    4.6 Seeding Algorithm (p. 31)
    4.7 Connection Management Algorithm (p. 33)
    4.8 Advantages of the Shield Nodes Architecture (p. 33)
    4.9 Markov Model for Shield Nodes Architecture Against Single Track Anonymity Attack (p. 35)
  Chapter 5 Experiment Result (p. 40)
    5.1 Shield Node architecture against anonymity attack (p. 40)
      5.1.1 Performance Analysis for Single Track Anonymity Attack (p. 41)
      5.1.2 Experiment Result on PlanetLab for Single Track Anonymity Attack (p. 42)
      5.1.3 Parallel Anonymity Attack (p. 44)
    5.2 Shield Nodes architecture against DoS attack (p. 45)
  Chapter 6 Related Work (p. 48)
  Chapter 7 Future Work (p. 49)
  Chapter 8 Conclusion (p. 50)
336

Novel Cryptographic Primitives and Protocols for Censorship Resistance

Dyer, Kevin Patrick 24 July 2015 (has links)
Internet users rely on the availability of websites and digital services to engage in political discussions, report on newsworthy events in real-time, watch videos, etc. However, sometimes those who control networks, such as governments, censor certain websites, block specific applications or throttle encrypted traffic. Understandably, when users are faced with egregious censorship, where certain websites or applications are banned, they seek reliable and efficient means to circumvent such blocks. This tension is evident in countries such as Iran and China, where the Internet censorship infrastructure is pervasive and continues to increase in scope and effectiveness. An arms race is unfolding with two competing threads of research: (1) network operators' ability to classify traffic and subsequently enforce policies and (2) network users' ability to control how network operators classify their traffic. Our goal is to understand and progress the state-of-the-art for both sides. First, we present novel traffic analysis attacks against encrypted communications. We show that state-of-the-art cryptographic protocols leak private information about users' communications, such as the websites they visit, applications they use, or languages used for communications. Then, we investigate means to mitigate these privacy-compromising attacks. Towards this, we present a toolkit of cryptographic primitives and protocols that simultaneously (1) achieve traditional notions of cryptographic security, and (2) enable users to conceal information about their communications, such as the protocols used or websites visited. We demonstrate the utility of these primitives and protocols in a variety of real-world settings. As a primary use case, we show that these new primitives and protocols protect network communications and bypass policies of state-of-the-art hardware-based and software-based network monitoring devices.
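The traffic-analysis side of this arms race can be illustrated with a small sketch: even when payloads are encrypted, coarse flow features such as packet counts and sizes can identify the site being visited. The three "sites" and their packet-length distributions below are synthetic assumptions, not the attacks developed in the dissertation.

```python
# Closed-world traffic-analysis illustration: classify synthetic encrypted
# traces from flow-level features only (packet count, mean/std length, bytes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
SITES = {"news": (600, 180), "video": (1300, 90), "webmail": (420, 240)}

def trace_features(mean_len, std_len):
    lengths = rng.normal(mean_len, std_len, rng.integers(30, 120))
    return [len(lengths), lengths.mean(), lengths.std(), lengths.sum()]

X, y = [], []
for label, (mean_len, std_len) in SITES.items():
    for _ in range(200):
        X.append(trace_features(mean_len, std_len))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("closed-world identification accuracy:", clf.score(X_te, y_te))
```

Circumvention tools of the kind the dissertation proposes aim to reshape exactly these observable features so that such classifiers fail.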
337

Ant Tree Miner Amyntas for intrusion detection

Botes, Frans Hendrik January 2018 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. / With the constant evolution of information systems, companies have to acclimatise to the vast increase of data flowing through their networks. Business processes rely heavily on information technology and operate within a framework of little to no space for interruptions. Cyber attacks aimed at interrupting business operations, false intrusion detections and leaked information burden companies with large monetary and reputational costs. Intrusion detection systems analyse network traffic to identify suspicious patterns that indicate an intent to compromise the system. Classifiers (algorithms) are used to classify the data into different categories, e.g. malicious or normal network traffic. Recent surveys within intrusion detection highlight the need for improved detection techniques and warrant further experimentation for improvement. This experimental research project focuses on implementing swarm intelligence techniques within the intrusion detection domain. The Ant Tree Miner algorithm induces decision trees by using ant colony optimisation techniques. The Ant Tree Miner offers high accuracy with efficient results. However, limited research has been performed on this classifier in other domains such as intrusion detection. The research provides the intrusion detection domain with a new algorithm that improves upon the results of decision trees and ant colony optimisation techniques when applied to the domain. The research has led to valuable insights into the Ant Tree Miner classifier within a previously unexplored domain and created an intrusion detection benchmark for future researchers.
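To give a flavour of the ant colony optimisation at the core of the Ant Tree Miner, the sketch below shows only the pheromone-weighted attribute-selection and pheromone-update step of such a tree inducer. The heuristic values, parameters and the stand-in "best ant" evaluation are illustrative assumptions; the actual tree construction and the Amyntas variant studied in the thesis are omitted.

```python
# Core ACO step of an ant-based tree inducer: pick split attributes with
# probability proportional to pheromone^alpha * heuristic^beta, then evaporate
# and reinforce pheromone for the choices made by the best ant.
import random

ATTRIBUTES = {"duration": 0.12, "src_bytes": 0.35,   # attribute -> heuristic
              "dst_bytes": 0.28, "flag": 0.20}       # (e.g. information gain)
pheromone = {a: 1.0 for a in ATTRIBUTES}
ALPHA, BETA, EVAPORATION = 1.0, 2.0, 0.15

def choose_attribute(rng):
    weights = {a: (pheromone[a] ** ALPHA) * (h ** BETA)
               for a, h in ATTRIBUTES.items()}
    r, cumulative = rng.random() * sum(weights.values()), 0.0
    for attribute, w in weights.items():             # roulette-wheel selection
        cumulative += w
        if r <= cumulative:
            return attribute
    return attribute                                  # floating-point guard

def reinforce(best_attributes, quality):
    for a in pheromone:
        pheromone[a] *= (1.0 - EVAPORATION)           # evaporation
        if a in best_attributes:
            pheromone[a] += EVAPORATION * quality     # deposit on best choices

rng = random.Random(0)
for iteration in range(5):
    picks = [choose_attribute(rng) for _ in range(10)]   # 10 "ants" pick splits
    # A full implementation grows and evaluates a tree per ant; here the
    # evaluation is omitted and we simply assume the best ant split on
    # src_bytes and flag with quality 0.9.
    reinforce({"src_bytes", "flag"}, quality=0.9)
    print(f"iteration {iteration}: ants chose {picks}")

print("pheromone after 5 iterations:",
      {a: round(t, 3) for a, t in pheromone.items()})
```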
338

Construction and formal security analysis of cryptographic schemes in the public key setting

Baek, Joonsang, 1973- January 2004 (has links)
Abstract not available
339

Utilising behaviour history and fuzzy trust levels to enhance security in ad-hoc networks

Hallani, Houssein, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2007 (has links)
A wireless Ad-hoc network is a group of wireless devices that communicate with each other without utilising any central management infrastructure. The operation of Ad-hoc networks depends on the cooperation among nodes to provide connectivity and communication routes. However, such an ideal situation may not always be achievable in practice. Some nodes may behave maliciously, resulting in degradation of the performance of the network or even disruption of its operation altogether. The ease of establishment, along with the mobility capabilities that these networks offer, provides many advantages. On the other hand, these very characteristics, as well as the lack of any centralised administration, are the root of several nontrivial challenges in securing such networks. One of the key objectives of this thesis is to achieve improvements in the performance of Ad-hoc networks in the presence of malicious nodes. In general, malicious nodes are considered to be nodes that subvert the capability of the network to perform its expected functions. Current Ad-hoc routing protocols, such as the Ad-hoc On-demand Distance Vector (AODV), have been developed without taking the effects of misbehaving nodes into consideration. In this thesis, to mitigate the effects of such nodes and to attain high levels of security and reliability, an approach that is based on the utilisation of the behaviour history of all member nodes is proposed. The aim of the proposed approach is to identify routes between the source and the destination that contain no malicious nodes or, if that is not possible, a minimal number of them. This is in contrast to traditional approaches that predominantly tend to use other criteria, such as the shortest path, alone. Simulation and experimental results collected after applying the proposed approach show significant improvements in the performance of Ad-hoc networks even in the presence of malicious nodes. However, to achieve further enhancements, this approach is expanded to incorporate trust levels between the nodes comprising the Ad-hoc network. Trust is an important concept in any relation among entities that comprise a group or network. Yet it is hard to quantify trust or define it precisely. Due to the dynamic nature of Ad-hoc networks, quantifying trust levels is an even more challenging task. This may be attributed to the fact that a number of different factors can affect trust levels between the nodes of Ad-hoc networks. It is well established that fuzzy logic and soft computing offer excellent solutions for handling imprecision and uncertainties. This thesis expands on relevant fuzzy logic concepts to propose an approach to establish quantifiable trust levels between the nodes of Ad-hoc networks. To achieve quantification of the trust levels for nodes, information about the behaviour history of the nodes is collected. This information is then processed to assess and assign fuzzy trust levels to the nodes that make up the Ad-hoc network. These trust levels are then used in the routing decision making process. The performance of an Ad-hoc network that implements the behaviour history based approach, using the OPtimised NETwork (OPNET) simulator, is evaluated for various topologies. The overall collected results show that the throughput, the packet loss rate, and the round trip delay are significantly improved when the behaviour history based approach is applied.
Results also show further enhancements in the performance of the Ad-hoc network when the proposed fuzzy trust evaluation approach is incorporated, with only a slight increase in routing traffic overhead. Given the improvements achieved when the fuzzy trust approach is utilised, future work combining this approach with other artificial intelligence techniques may prove fruitful in further enhancing the security and reliability of Ad-hoc networks. The learning capability of Artificial Neural Networks makes them a prime candidate for combination with fuzzy based systems in order to improve the proposed trust level evaluation approach. / Doctor of Philosophy (PhD)
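A toy version of the fuzzy trust idea can be sketched as follows: a node's observed packet-forwarding ratio (its behaviour history) is fuzzified into low/medium/high trust sets and defuzzified into a single score that a routing decision could use. The membership functions, rule mapping and 0-1 scale are illustrative assumptions, not the thesis's actual design.

```python
# Toy fuzzy trust estimate from behaviour history (packet-forwarding ratio).

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_level(forwarded, requested):
    ratio = forwarded / requested if requested else 0.0
    memberships = {                      # degree of membership in each set
        "low":    triangular(ratio, -0.01, 0.0, 0.5),
        "medium": triangular(ratio, 0.25, 0.5, 0.75),
        "high":   triangular(ratio, 0.5, 1.0, 1.01),
    }
    # Each fuzzy set maps to a representative trust value; defuzzify with a
    # membership-weighted average (a simple centroid-style step).
    representative = {"low": 0.1, "medium": 0.5, "high": 0.9}
    total = sum(memberships.values())
    if total == 0:
        return 0.0
    return sum(memberships[s] * representative[s] for s in memberships) / total

# Behaviour history: (packets forwarded, packets the node was asked to forward).
for node, history in {"A": (96, 100), "B": (55, 100), "C": (12, 100)}.items():
    print(f"node {node}: trust = {trust_level(*history):.2f}")
```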
340

Contingency planning models for Government agencies

January 1996 (has links)
This report describes a research study of the current situation within Federal and State Government agencies and selected private sector organisations, assessing their contingency plans for Information Systems, and suggests models for state-wide planning against Information Systems disasters. Following a brief look at various phases of contingency plan development, the study examines the factors that prompt organisations to prepare contingency plans. The project involved a survey of current Information Systems contingency plans in the government agencies of Victoria, Western Australia, South Australia, New South Wales and the Australian Capital Territory. It also included two major banks, an insurance company and two computer services bureaux in the private sector within New South Wales. The survey determined that particular factors play important roles in the decision by organisations to commence contingency planning. These include actual disaster experience, senior management support, auditors' comments, legal requirements, risk analysis and business impact studies, economic considerations, insurance requirements, contract commitments, new staff and the introduction of new hardware and software. The critical success factors in contingency planning include regular maintenance and testing of the plan. The project also discusses the current contingency planning environment within New South Wales Government agencies and suggests cost-effective models for state-wide adoption.
