  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Enhancements of the Non-linear Knapsack Cryptosystem

Tu, Zhiqi, January 2006
Existing public key cryptosystems fall into three categories, each relying on a different mathematical foundation. The first is based on the difficulty of factoring the product of two large prime numbers; its representatives are the RSA and Rabin cryptosystems. The second, exemplified by the ElGamal cryptosystem, is based on the discrete logarithm problem. The last is based on the NP-completeness of the knapsack problem. The first two categories have survived cryptanalytic attacks, whereas the last was broken, and such cryptosystems fell out of use. To rescue this last category, Kiriyama proposed a new public key cryptosystem based on the non-linear knapsack problem, which is NP-complete. Owing to its non-linear structure, this system resists all known attacks on linear knapsack cryptosystems. Building on his work, we extend the research in several ways. First, we propose an encrypted secret sharing scheme that improves the security of shares over existing secret sharing schemes: it would be hard for outsiders to recover the secret even if they somehow collected all shares, because each share is already encrypted when it is generated. Moreover, the scheme is efficient. We then propose a multiple identities authentication scheme, developed on the basis of the non-linear knapsack scheme. It verifies the ownership of an entity's several identities in a single execution, protects the privacy of the entities from outsiders, and, thanks to its low computational complexity, can be used on resource-constrained devices. We implement the above schemes in C under Linux. The experimental results show the high efficiency of our schemes, owing to the low computational complexity of the non-linear knapsack problem, which serves as the mathematical foundation of our research.
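The encrypted-share idea can be illustrated with a minimal sketch: an n-out-of-n XOR sharing scheme in which each share is encrypted the moment it is generated. A one-time pad stands in for the thesis's non-linear knapsack encryption here, and all names are illustrative, not taken from the thesis.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """n-out-of-n XOR sharing: all n shares XOR back to the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

# Each share is additionally encrypted at generation time, so even an
# adversary holding every (encrypted) share learns nothing without the keys.
def encrypt_share(share: bytes, key: bytes) -> bytes:
    return xor_bytes(share, key)  # one-time pad; same call decrypts
```

Combining the encrypted shares without the per-holder keys yields only noise; decrypting each share first recovers the secret exactly.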
132

A Verified Algorithm for Detecting Conflicts in XACML Access Control Rules

St-Martin, Michel, 11 January 2012
The goal of this thesis is to find provably correct methods for detecting conflicts between XACML rules. A conflict occurs when one rule permits a request and another denies that same request. Since XACML deals with access control, we can help prevent unwanted access by verifying that a policy contains no unintended conflicts. To this end, we propose an algorithm that finds these conflicts, and we use the Coq Proof Assistant to prove the algorithm correct. The algorithm takes a rule set specified in XACML and returns a list of pairs of indices denoting which rules conflict. It is then up to the policy writer to see whether the conflicts are intended or need modifying. Since we prove that the algorithm is both sound and complete, we can be assured that the list it produces contains all conflicts and only true conflicts.
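The shape of such a conflict detector can be sketched simply (this is not the thesis's verified algorithm, just an illustration of the idea under an assumed rule representation: each rule carries an effect and per-attribute sets of matching values, and two rules conflict when their effects differ and their matched request sets overlap):

```python
def rules_conflict(r1: dict, r2: dict) -> bool:
    # Opposite effects are necessary for a conflict.
    if r1["effect"] == r2["effect"]:
        return False
    # The rules overlap if, for every attribute both constrain,
    # their allowed value sets intersect (a request could match both).
    for attr in set(r1["match"]) & set(r2["match"]):
        if not (r1["match"][attr] & r2["match"][attr]):
            return False
    return True

def find_conflicts(rules: list[dict]) -> list[tuple[int, int]]:
    """Return index pairs of conflicting rules, as the thesis's algorithm does."""
    return [(i, j)
            for i in range(len(rules))
            for j in range(i + 1, len(rules))
            if rules_conflict(rules[i], rules[j])]

rules = [
    {"effect": "Permit", "match": {"role": {"admin"}}},
    {"effect": "Deny",   "match": {"role": {"admin", "guest"}}},
    {"effect": "Deny",   "match": {"role": {"guest"}}},
]
```

On this example, rules 0 and 1 conflict (both can apply to an admin request with opposite effects), while rules 0 and 2 match disjoint requests.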
133

Replication, Security, and Integrity of Outsourced Data in Cloud Computing Systems

Barsoum, Ayad Fekry, 14 February 2013
In the current digital era, the amount of sensitive data produced by many organizations is outpacing their storage ability. Managing such huge amounts of data is expensive due to the required storage capacity and qualified personnel. Storage-as-a-Service (SaaS), offered by cloud service providers (CSPs), is a paid facility that enables organizations to outsource their data for storage on remote servers. SaaS thus reduces maintenance costs and mitigates the burden of large local data storage at the organization's end. For increased scalability, availability, and durability, some customers may want their data replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the higher the fees the customers are charged. Customers therefore need a strong guarantee that the CSP is storing all data copies agreed upon in the service contract, and that these copies remain intact. In this thesis we address the problem of creating multiple copies of a data file and verifying those copies stored on untrusted cloud servers. We propose a pairing-based provable multi-copy data possession (PB-PMDP) scheme, which provides evidence that all outsourced copies are actually stored and remain intact. Moreover, it allows authorized users (i.e., those who have the right to access the owner's file) to seamlessly access the file copies stored by the CSP, and it supports public verifiability. We then direct our study to the dynamic behavior of outsourced data, where the data owner can not only archive and access the copies stored by the CSP but also update and scale them on the remote servers using block operations: modification, insertion, deletion, and append. We propose a new map-based provable multi-copy dynamic data possession (MB-PMDDP) scheme that verifies the integrity and consistency of multiple outsourced dynamic data copies.
To the best of our knowledge, the proposed scheme is the first to verify the integrity of multiple copies of dynamic data on untrusted cloud servers. As a complementary line of research, we consider protecting the CSP from a dishonest owner who attempts to obtain illegal compensation by falsely claiming data corruption on cloud servers. We propose a new cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP while enabling mutual trust between them. In addition, the proposed scheme ensures that authorized users receive the latest version of the outsourced data, and it enables the owner to grant or revoke access to the data stored by the cloud servers.
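The core challenge-response idea behind provable multi-copy possession can be sketched with a simplified hash-based stand-in (the actual PB-PMDP scheme is pairing-based and supports public verifiability; this toy version only shows why distinct copies prevent a cheating server from answering a challenge for copy i using copy j):

```python
import hashlib
import secrets

def make_copies(data: bytes, n: int) -> list[bytes]:
    # Make the copies distinct by prefixing a copy index, so a proof
    # computed over one copy cannot be passed off as another.
    return [bytes([i]) + data for i in range(n)]

def prove(stored_copy: bytes, nonce: bytes) -> bytes:
    """Server's response to a fresh challenge nonce for one copy."""
    return hashlib.sha256(nonce + stored_copy).digest()

def verify(data: bytes, copy_index: int, nonce: bytes, proof: bytes) -> bool:
    """Owner recomputes the expected response for the challenged copy."""
    expected = hashlib.sha256(nonce + bytes([copy_index]) + data).digest()
    return proof == expected
```

A fresh nonce per challenge stops the server from caching old answers and discarding the data.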
134

Analysis and optimization of MAC protocols for wireless networks

Shu, Feng, Unknown Date (PDF)
Medium access control (MAC) plays a vital role in satisfying the varied quality of service (QoS) requirements of wireless networks. Many MAC solutions have been proposed for these networks, and performance evaluation, optimization, and enhancement of these protocols are needed. In this thesis, we focus on the analysis and optimization of MAC protocols for recently emerged wireless technologies targeted at low-rate and multimedia applications.
135

Secure information flow for inter-organisational collaborative environments

Bracher, Shane, Unknown Date
Collaborative environments allow users to share and access data across networks spanning multiple administrative domains and organisational boundaries. This poses several security concerns, such as data confidentiality, data privacy, and the threat of improper data usage. Traditional access control mechanisms focus on centralised systems and implicitly assume that all resources reside in one domain. This is a critical limitation for inter-organisational collaborative environments, which are characteristically decentralised, distributed, and heterogeneous. One consequence of the lack of suitable access control mechanisms for such environments is that data owners relinquish all control over the data they release. In these environments, we can reasonably consider more complex cases in which documents have multiple contributors, each with differing access control requirements. Facilitating such cases, while maintaining control over a document's content, structure, and flow path as it circulates through multiple administrative domains, is a non-trivial issue. This thesis proposes an architecture model for specifying and enforcing access control restrictions on sensitive data that follows a pre-defined inter-organisational workflow. Our approach is to embed access control enforcement within the workflow object (e.g. the circulating document containing sensitive data), rather than relying on each administrative domain to enforce the access control policies. The architecture model achieves this using cryptographic access control: a concept that relies on cryptography, rather than a trusted reference monitor, to enforce access control policies.
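The essence of cryptographic access control is that possession of a key, not the goodwill of the hosting domain, determines who can read a section. A toy sketch of a document whose sections are individually encrypted follows; the stream cipher here is a homemade illustration only (real systems use vetted ciphers such as AES-GCM), and the section names and keys are invented for the example:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream by hashing the key with a running counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# A circulating document whose sections are encrypted under different keys;
# a domain can read only the sections whose keys it was given.
document = {
    "summary":  crypt(b"key-shared",  b"quarterly results look good"),
    "salaries": crypt(b"key-hr-only", b"alice: 90k, bob: 85k"),
}
```

A domain holding only `key-shared` can read the summary but sees the salaries section as noise, no matter which server hosts the document.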
136

Security and privacy model for association databases

Kong, Yibing. January 2003 (has links)
Thesis (M.Comp.Sc.)--University of Wollongong, 2003. Typescript. Bibliographical references: leaves 93-96.
137

Federated Access Management for Collaborative Environments

January 2016
Access control has historically been recognized as an effective technique for ensuring that computer systems preserve important security properties. Recently, attribute-based access control (ABAC) has emerged as a new paradigm that provides access mediation by leveraging attributes: observable properties that become relevant under a certain security context and are exhibited by the entities normally involved in the mediation process, namely end-users and protected resources. Also recently, independently run organizations from the private and public sectors have recognized the benefits of multi-disciplinary research collaborations that involve sharing sensitive proprietary resources, such as scientific data, networking capabilities, and computation time, and have recognized ABAC as the paradigm that suits their needs for restricting how such resources are shared. In such a setting, a robust yet flexible access mediation scheme is crucial to guarantee that participants are granted access to these resources in a safe and secure manner. However, no consensus exists in the literature on a formal model that clearly defines how the components of ABAC should interact, so that the rigorous study of security properties can be effectively pursued. This dissertation proposes a well-defined and formal definition of ABAC, including a description of how attributes exhibited by different independent organizations are leveraged to mediate access to shared resources, by allowing collaborating parties to engage in federations for the specification, discovery, evaluation, and communication of attributes, policies, and access mediation decisions.
In addition, a software assurance framework is introduced to support the correct construction of enforcement mechanisms implementing our approach, leveraging validation and verification techniques based on software assertions, namely design by contract (DBC) and behavioral interface specification languages (BISL). Finally, this dissertation proposes a distributed trust framework for exchanging recommendations on the perceived reputations of members of our proposed federations, so that the trust level of previously unknown participants can be properly assessed for the purposes of access mediation. Doctoral Dissertation, Computer Science, 2016.
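The basic ABAC mediation step, deciding on a request from the attributes it exhibits, can be sketched as follows (this is a generic illustration with a deny-overrides combining rule, not the dissertation's formal model; the attribute names are invented):

```python
def applicable(rule: dict, attrs: dict) -> bool:
    # A rule applies when every attribute it constrains matches the request.
    return all(attrs.get(k) == v for k, v in rule["match"].items())

def decide(rules: list[dict], attrs: dict) -> str:
    """Deny-overrides combining: any applicable Deny wins over Permits."""
    effects = [r["effect"] for r in rules if applicable(r, attrs)]
    if "Deny" in effects:
        return "Deny"
    if "Permit" in effects:
        return "Permit"
    return "NotApplicable"

rules = [
    {"match": {"role": "researcher", "resource": "dataset"}, "effect": "Permit"},
    {"match": {"clearance": "revoked"}, "effect": "Deny"},
]
```

In a federated setting, the `attrs` dictionary would be assembled from attributes asserted by different organizations rather than a single local directory.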
138

Workflow management systems, their security and access control mechanisms

Chehrazi, Golriz, January 2007
This paper gives an overview of workflow management systems (WfMSs) and their security requirements, with a focus on access control mechanisms. It is a descriptive paper in which we examine the state of the art of workflow systems, describe which security risks affect WfMSs in particular, and show how these can be diminished. WfMSs manage, illustrate, and support business processes. They contribute to the performance, automation, and optimization of processes, which is important in today's global economy. The security of process flows matters because sensitive business data must be protected, both to inhibit illegal activities such as blackmail, imitation, and fraud, and to provide good customer service. This paper focuses on access control mechanisms because they are the basic security mechanisms WfMSs use to ensure that only authorized users are granted access to data and resources. Moreover, because of the insecurity of the Internet, which is commonly used as the infrastructure for workflow systems, additional security mechanisms such as PKIs, digital signatures, and SSL must be used to provide secure workflows. Depending on the particular requirements of workflow systems, various extended access control (AC) mechanisms have been developed to maintain security. For commercially used WfMSs, however, the availability of the system is of utmost importance: it is a prerequisite for companies to employ the system at all. The problem is that there is always a trade-off between availability and security, and because this trade-off is generally resolved in favor of availability, a large part of the developed AC mechanisms is not used in commercial WfMSs. After the first, rather theoretical part of this paper, we examine a commercial WfMS, namely IBM's MQ Workflow, and its security mechanisms, showing vulnerabilities of the system that could be abused by attackers.
Afterwards, we show which security mechanisms, in particular AC mechanisms, are provided to guard against these threats. We conclude with a summary that highlights the difference between security concepts developed in research and those actually implemented in commercially used WfMSs.
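The basic AC guarantee described above, that only authorized users may perform a workflow step, reduces to a simple role check per task. A minimal sketch (task and role names are invented for illustration):

```python
# Roles permitted for each workflow task (illustrative names only).
TASK_ROLES = {
    "enter_invoice":   {"clerk", "manager"},
    "approve_invoice": {"manager"},
}

def can_perform(user_roles: set, task: str) -> bool:
    # Grant only when the user holds at least one role permitted for the task;
    # unknown tasks are denied by default.
    return bool(user_roles & TASK_ROLES.get(task, set()))
```

Separation-of-duty constraints, one of the extended AC mechanisms the paper alludes to, would additionally check which tasks the same user already performed in this workflow instance.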
139

A prototype to discover and penetrate access restricted web pages in an Extranet

Van Jaarsveld, Rudi, 13 October 2014
M.Sc. (Information Technology). The internet grew exponentially over the last decade. With more information available on the web, search engines, with the help of web crawlers (also known as web bots), gather information on the web and index billions of web pages. This indexed information helps users find relevant information on the internet. An extranet is a subset of the internet. This part of the web controls access to a specific resource for a selected audience; such sites are also referred to as restricted web sites. Various industries use extranets for different purposes and store different types of information on them. Some of this information may be confidential, so it is important that it is adequately secured and not accessible to web bots. In some cases web bots can accidentally stumble onto poorly secured pages in an extranet and add the restricted web pages to their indexed search results. Search engines like Google, which are designed to filter through large amounts of data, can accidentally crawl onto access-restricted web pages if such pages are not secured properly. Researchers have found that it is possible for the web crawlers of well-known search engines to access poorly secured web pages in access-restricted web sites. The risk is that not all web bots have good intentions; some have a more malicious intent. These malicious web bots search for vulnerabilities in extranets and use them to access confidential information. The main objective of this dissertation is to develop a prototype web bot called Ferret that crawls through a web site, tries to discover and access restricted web pages that are poorly secured in the extranet, and reports the weaknesses. From the findings of this research, a best-practice guideline is drafted to help developers ensure that access-restricted web pages are secured and invisible to web bots.
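What a bot like Ferret looks for can be sketched without any networking: crawl as an anonymous client and flag pages that are supposed to be restricted yet are served without authentication. The site model below (a dict standing in for HTTP fetches) and all URLs are invented for the example; the actual Ferret prototype is not reproduced here.

```python
# Toy site model: url -> page metadata. "protected" means the page is
# supposed to require login; "served_without_auth" is what an anonymous
# crawler actually observes.
SITE = {
    "/":                {"protected": False, "served_without_auth": True,
                         "links": ["/about", "/extranet/login"]},
    "/about":           {"protected": False, "served_without_auth": True,
                         "links": []},
    "/extranet/login":  {"protected": False, "served_without_auth": True,
                         "links": ["/extranet/reports"]},
    # Misconfigured: marked restricted but served to anonymous clients.
    "/extranet/reports": {"protected": True, "served_without_auth": True,
                          "links": []},
}

def find_leaks(site: dict, start: str = "/") -> list[str]:
    """Crawl unauthenticated from `start`; report reachable restricted pages."""
    seen, leaks, queue = set(), [], [start]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        page = site.get(url)
        if page is None or not page["served_without_auth"]:
            continue  # properly secured pages stop the crawl here
        if page["protected"]:
            leaks.append(url)  # restricted yet reachable: a leak
        queue.extend(page["links"])
    return leaks
```

A real crawler would replace the dict lookup with an HTTP request and infer "protected" from the site's access policy, but the reachability logic is the same.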
140

Information security in the client/server environment

Botha, Reinhardt A, 23 August 2012
M.Sc. (Computer Science). Client/server computing is currently one of the buzzwords in the computer industry. The client/server environment can be defined as an open systems environment, and this openness makes it a very popular environment to operate in. As information is increasingly accessed in a client/server manner, certain security issues arise. To address this definite need for a secure client/server environment, it is first necessary to define the environment, which is accomplished by defining three possible ways to partition programs within it. Security, or a secure system, means different things to different people. This dissertation defines six attributes of information that should be maintained in order to have secure information; for certain environments, some of these attributes may be unnecessary or of lesser importance. Different security techniques and measures are discussed and classified in terms of the client/server partitions and the security attributes they maintain. This classification is presented as a matrix, providing an easy reference for deciding on security measures in the client/server environment to protect a specific aspect of the information. The importance of a security policy, and more specifically the influence of the client/server environment on such a policy, is discussed, and it is demonstrated that the framework can assist in drawing up a security policy for a client/server environment. The dissertation furthermore presents an electronic document management system as a case study and shows that the client/server environment is suitable for such a system. The security needs and problems are identified and classified in terms of the security attributes, and solutions are discussed in order to provide a reasonably secure electronic document management system environment.
