311

Semi-supervised learning of bitmask pairs for an anomaly-based intrusion detection system

Ardolino, Kyle R. January 2008 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical Engineering, 2008. / Includes bibliographical references.
312

Anomaly-based intrusion detection using lightweight stateless payload inspection

Nwanze, Nnamdi Chike. January 2009 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2009. / Includes bibliographical references.
313

Deploying DNSSEC in islands of security

Murisa, Wesley Vengayi 31 March 2013 (has links)
The Domain Name System (DNS), a name resolution protocol, is one of the network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. Security was not incorporated into the original design of DNS. The DNS Security Extensions (DNSSEC) secure the name resolution process by using public key cryptosystems. Although DNSSEC is backward compatible with unsecured zones, it only offers security to clients when communicating with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC, which makes it difficult for their sub-domains to implement the security extensions. This study analyses mechanisms that domains in islands of security can use to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar cannot support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The browser validators, however, proved useful in indicating to the user whether a domain has been validated. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers, but they do not show the user whether a domain name has been correctly validated. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security.
On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organizations in islands of security, as in most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web-based services.
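The island-of-security scenario above can be sketched as a toy chain-of-trust walk: ordinary DNSSEC validation succeeds only if every ancestor zone is signed, and a look-aside registry supplies the trust anchor when the parent TLD is not. The zone names and the DLV registry below are invented for illustration, not real DNS data.

```python
# Toy model of DNSSEC chain-of-trust validation with DLV fallback.
# The zones and the DLV registry are hypothetical, not real DNS data.

ZONES = {
    ".":               {"signed": True,  "parent": None},
    "africa.":         {"signed": False, "parent": "."},         # unsigned ccTLD
    "example.africa.": {"signed": True,  "parent": "africa."},   # island of security
}

# A look-aside registry holds trust anchors for signed zones whose
# parents are unsigned, so validators can still build a chain of trust.
DLV_REGISTRY = {"example.africa."}

def validate(zone: str) -> str:
    """Return 'secure', 'insecure', or 'dlv-secure' for a zone."""
    if not ZONES[zone]["signed"]:
        return "insecure"
    # Walk up the hierarchy: every ancestor must be signed for a
    # normal chain of trust rooted at ".".
    parent = ZONES[zone]["parent"]
    while parent is not None:
        if not ZONES[parent]["signed"]:
            # Chain broken at an unsigned parent: fall back to DLV.
            return "dlv-secure" if zone in DLV_REGISTRY else "insecure"
        parent = ZONES[parent]["parent"]
    return "secure"
```

The sketch shows why a signed domain under an unsigned TLD is stranded without look-aside validation: the walk from the root breaks before reaching it.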
314

Amber: a zero-interaction honeypot with distributed intelligence

Schoeman, Adam January 2015 (has links)
For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception is the honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on the behaviour of the data. However, due to limited human capital, the uptake of honeypots in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system that uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties, which can be passed on to a network enforcer. Empirical testing has proved the usefulness of this alternative, low-cost approach to defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive.
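The Decision-through-Presence idea reduces to very little code: any source address that touches unused honeypot space is blacklisted outright, with no behavioural analysis. A minimal sketch, with invented addresses and connection events:

```python
# Sketch of Decision through Presence (DtP): mere contact with a
# honeypot address is the decision. Addresses and events are invented.

HONEYPOT_ADDRS = {"10.0.0.250", "10.0.0.251"}  # unused address space

def build_blacklist(events):
    """events: iterable of (src_ip, dst_ip) connection attempts.
    Returns the set of sources that contacted any honeypot address."""
    blacklist = set()
    for src, dst in events:
        if dst in HONEYPOT_ADDRS:   # presence alone triggers the decision
            blacklist.add(src)
    return blacklist

events = [
    ("192.0.2.7", "10.0.0.5"),       # productive traffic: ignored
    ("198.51.100.9", "10.0.0.250"),  # touched the honeypot: blacklisted
]
hive = build_blacklist(events)
```

In a distributed deployment like the one described, each node would stream its local blacklist to the central hive, which unions them before handing the result to a network enforcer.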
315

The Role of Self-Efficacy in Computer Security Behavior: Developing the Construct of Computer Security Self-Efficacy (CSSE)

Clarke, Marlon Renese 01 January 2011 (has links)
As organizations have become more dependent on networked information systems (IS) to conduct their business operations, their susceptibility to various threats to information security has also increased. Research has consistently identified inappropriate security behavior by users as the most significant of these threats. Various factors have been identified as contributing to these inappropriate security behaviors; however, not enough is known about the role of social factors in mediating them. This study developed a new computer security self-efficacy (CSSE) construct, identified items of CSSE in the context of individuals' use of encrypted e-mail, and determined the validity and reliability of the items of CSSE. Further, significant factors of CSSE were identified. First, a qualitative phase comprising focus groups and an expert panel was used to identify valid items of CSSE, develop a new instrument to measure the new CSSE construct, and validate the new CSSE instrument. After completing the qualitative phase, a quantitative phase was employed to collect empirical data on the CSSE items. The CSSE measurement instrument was administered to IS users at a major university in the southeastern United States, and 292 responses were received. The collected data were statistically analyzed to identify significant factors of CSSE and the items of CSSE that demonstrate high reliability. Factor analysis was performed using Principal Component Analysis (PCA) and identified four significant and highly reliable factors of CSSE with a cumulative variance of nearly 68%. The four factors were named Performance Accomplishments and Technical Support, Goal Commitment and Resource Availability, Experience Level, and Individual Characteristics. Additionally, 35 items of CSSE were identified as possessing high reliability.
This study contributes to advancing the body of knowledge regarding the use of e-mail encryption by developing a new CSSE construct and extending Computer Self-Efficacy research into the area of computer security and e-mail encryption. Further, by identifying factors of CSSE, an understanding is obtained of what IS users believe will impact their ability to use encryption to send e-mail messages. This understanding can aid in enhancing the use of encryption mechanisms to send e-mail, promoting positive computer security behavior, and thus contribute positively to IS practice.
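As a concrete illustration of the kind of item-reliability analysis described above, Cronbach's alpha is the standard internal-consistency statistic for instrument items. The formula and the toy responses below are textbook material, not data from this study:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
# where k is the number of items. Toy data, invented for illustration.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists (same respondents, same order).
    Returns the internal-consistency estimate for the item set."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]        # total score per respondent
    item_var = sum(pvariance(item) for item in items)   # summed item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Two strongly correlated items yield a high alpha.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Values near 1 indicate that items measure a common underlying construct, which is the sense in which the 35 retained CSSE items were judged highly reliable.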
316

Automated Verification of Safety and Liveness Properties for Distributed Protocols

Yao, Jianan January 2025 (has links)
The world relies on distributed systems, but these systems are increasingly complex and hard to design and implement correctly. This is due to the intrinsic non-determinism from asynchronous node communications, various failure scenarios, and potentially adversarial participants. To address this problem, developers are starting to turn to formal verification techniques to prove the correctness of distributed systems. This involves formally verifying that desired safety and liveness properties hold for the distributed protocol. A safety property is an invariant that should hold true at any point in a system's execution. It ensures the protocol does not reach invalid or dangerous states. A liveness property, on the contrary, describes that some desired good event will eventually happen. There have long been efforts to formally verify safety and liveness of distributed protocols. However, the proof burden is usually prohibitively high for broad real-world adoption. Although there is a growing list of methods that try to automate the verification of distributed protocols, in particular their safety properties, none of these tools scale to real-world complex protocols with theoretical guarantees. In this dissertation, I introduce our verification methods and tools for verifying distributed protocols with little to no human effort. The thesis consists of two parts. In the first part, I present our two inductive invariant inference tools, DistAI and DuoAI, which automatically verify safety properties of distributed protocols. In DistAI, I introduce a simulation-enumeration-refinement framework for invariant reasoning, and DuoAI extends it to more complex protocols and existential quantifiers. The evaluation shows that DuoAI outperforms alternative methods in both the number of protocols verified and the speed to verify them, including solving Paxos more than two orders of magnitude faster than any alternative method.
In the second part, I introduce LVR, our liveness verification tool for distributed protocols. The key theoretical insight is that liveness verification can be soundly reduced to the verification of a list of simpler safety properties, which can often be proved automatically utilizing an arsenal of invariant inference tools. The reduction leaves one remaining task---to synthesize a ranking function to prove termination, for which I present a new and effective pipeline. LVR is successfully applied to eight distributed protocols and is the first to demonstrate that liveness properties of distributed protocols can be proved with limited human input.
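The safety/liveness distinction can be made concrete with a toy protocol: checking that an invariant holds in every reachable state proves safety, while a strictly decreasing ranking function proves that a good event eventually occurs. The token-ring protocol below is a minimal invented example, not one of the protocols verified by DistAI, DuoAI, or LVR:

```python
# Toy token-ring protocol over N nodes: the token moves one node per step.
# Safety: the token holder is always a single, valid node index.
# Liveness: node 0 eventually holds the token, shown via a ranking function.

N = 5

def step(state):
    """state: index of the current token holder; the token passes on."""
    return (state + 1) % N

def check_safety(init, steps=100):
    """Explore executions, asserting the invariant in every reached state."""
    state = init
    for _ in range(steps):
        assert 0 <= state < N          # safety invariant
        state = step(state)
    return True

def rank(state):
    """Ranking function: steps until node 0 holds the token.
    Strictly decreases until the good event happens."""
    return (N - state) % N

def check_liveness(init):
    """Termination argument: rank decreases on every step before the event."""
    state = init
    while state != 0:
        r = rank(state)
        state = step(state)
        assert rank(state) < r         # strict decrease guarantees progress
    return True
```

This mirrors the reduction the dissertation describes: the liveness claim becomes a safety-style obligation ("the ranking function never increases before the good event"), which is the kind of property invariant inference tools can discharge.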
317

Analysing layered security protocols

Gibson-Robinson, Thomas January 2013 (has links)
Many security protocols are built as the composition of an application-layer protocol and a secure transport protocol, such as TLS. There are many approaches to proving the correctness of such protocols. One popular approach is verification by abstraction, in which the correctness of the application-layer protocol is proven under the assumption that the transport layer satisfies certain properties, such as confidentiality. Following this approach, we adapt the strand spaces model in order to analyse application-layer protocols that depend on an underlying secure transport layer, including unilaterally authenticating secure transport protocols, such as unilateral TLS. Further, we develop proof rules that enable us to prove the correctness of application-layer protocols that use either unilateral or bilateral secure transport protocols. We then illustrate these rules by proving the correctness of WebAuth, a single-sign-on protocol that makes extensive use of unilateral TLS. In this thesis we also present a full proof of the model's soundness. In particular, we prove that, subject to a suitable independence assumption, if there is an attack against the application-layer protocol when layered on top of a particular secure transport protocol, then there is an attack against the abstracted model of the application-layer protocol. In contrast to existing work in this area, the independence assumption consists of eight conditions that can be checked statically, rather than by considering all possible runs of the protocol. Lastly, we extend the model to allow protocols that consist of an arbitrary number of layers to be proven correct. In this case, we prove the correctness of the intermediate layers using the high-level strand spaces model, by abstracting away from the underlying transport layers. Further, we extend the above soundness results in order to prove that the multi-layer approach is sound.
We illustrate the effectiveness of our technique by proving the correctness of a couple of simple multi-layer protocols.
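Verification by abstraction can be illustrated with a small model in which the transport layer is replaced by an object that simply asserts its guaranteed properties, and the application layer is analysed against those assumptions. The channel class and attacker view below are a hypothetical sketch of the idea, not the strand spaces model itself:

```python
# Sketch of verification by abstraction: the application layer sees only
# an abstract channel carrying the transport layer's assumed guarantees
# (here modelling unilateral TLS: server authenticated, traffic
# confidential). All names here are illustrative.

class AbstractChannel:
    """Stands in for unilateral TLS in the application-layer model."""
    def __init__(self, server_authenticated=True, confidential=True):
        self.server_authenticated = server_authenticated
        self.confidential = confidential
        self.log = []                      # messages observed on the wire

    def send(self, sender, payload):
        self.log.append((sender, payload))

def attacker_view(channel):
    """What a network attacker learns: on a confidential channel, only
    that messages were sent, never their payloads."""
    if channel.confidential:
        return [(s, "<opaque>") for s, _ in channel.log]
    return list(channel.log)

tls = AbstractChannel()
tls.send("client", "password:hunter2")
leak = attacker_view(tls)
```

Proofs about the application layer then only need to hold for every channel satisfying the assumed properties; the soundness result quoted above is what licenses transferring an attack on the abstraction back to the concrete layered protocol.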
318

Role-Based Access Control Administration of Security Policies and Policy Conflict Resolution in Distributed Systems

Kibwage, Stephen Sakawa 01 February 2015 (has links)
Security models using access control policies have improved over the years from Role-based access control (RBAC) to newer models that add features such as support for distributed systems and address problems in older security policy models, such as identifying policy conflicts. Access control policies based on hierarchical roles provide more flexibility in controlling system resources for users. The policies allow for granularity when extended to have both allow and deny permissions as well as a weighted priority attribute for the rules in the policies. Such flexibility allows administrators to succinctly specify access to their system resources, but it also makes policies prone to conflict. This study found that conflicts in access control policies were still a problem, even in recent literature. There have been successful attempts at using algorithms to identify the conflicts. However, the conflicts were only identified, not resolved or averted, and system administrators still had to resolve the policy conflicts manually. This study proposed a weighted attribute administration model (WAAM) containing values that feed the calculation of a weighted priority attribute. The values are tied to the user, hierarchical role, and secured objects in a security model to ease their administration and are included in the expression of the access control policy. This study also suggested a weighted attribute algorithm (WAA) that uses these values to resolve conflicts in the access control policies. The proposed solution was demonstrated in a simulation that combined the WAAM and WAA. The simulation's database used WAAM and had data records for access control policies, some of which had conflicts. The simulation then showed that WAA could both identify and resolve access control policy (ACP) conflicts while providing results in sub-second time. The WAA is extensible, so implementing systems can extend it to meet specialized needs.
This study shows that ACP conflicts can be identified and resolved during authorization of a user into a system.
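Weighted-priority conflict resolution in the spirit of the WAA can be sketched in a few lines: when allow and deny rules both match a request, the rule with the highest weight wins, with deny breaking ties as a fail-safe default. The rule schema and weights below are invented for illustration, not taken from the study:

```python
# Sketch of weighted-attribute conflict resolution: conflicting allow/deny
# rules are resolved by weighted priority. Rules and weights are invented.

def decide(rules, user_roles, obj, action):
    """rules: list of dicts with role, obj, action, effect, weight.
    Returns 'allow' or 'deny'; deny wins ties and is the default."""
    matches = [r for r in rules
               if r["role"] in user_roles
               and r["obj"] == obj and r["action"] == action]
    if not matches:
        return "deny"                               # fail-safe default
    # Highest weight wins; on equal weights, deny outranks allow.
    best = max(matches, key=lambda r: (r["weight"], r["effect"] == "deny"))
    return best["effect"]

rules = [
    {"role": "staff",   "obj": "report", "action": "read",
     "effect": "allow", "weight": 3},
    {"role": "interns", "obj": "report", "action": "read",
     "effect": "deny",  "weight": 5},   # higher weight resolves the conflict
]
# A user holding both roles triggers an allow/deny conflict,
# which the weighted priority resolves automatically.
decision = decide(rules, {"staff", "interns"}, "report", "read")
```

The point of such a scheme, as in the study, is that the conflict is resolved at authorization time rather than being flagged for a system administrator to untangle manually.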
319

Concurrent auditing on computerized accounting systems

梁松柏, Leung, Chung-pak. January 1998 (has links)
Business Administration / Master of Business Administration
320

A secure e-course copyright protection infrastructure

Yau, Cho-ki, Joe., 邱祖淇. January 2006 (has links)
Computer Science / Doctor of Philosophy
