1

A generalized trust model using network reliability

Mahoney, Glenn R. 10 April 2008 (has links)
Economic and social activity is increasingly reflected in operations on digital objects and network-mediated interactions between digital entities. Trust is a prerequisite for many of these interactions, particularly if items of value are to be exchanged. The problem is that automated handling of trust-related concerns between distributed entities is a relatively new concept, and many existing capabilities are limited or application-specific, particularly in the context of informal or ad-hoc relationships. This thesis contributes a new family of probabilistic trust metrics based on Network Reliability called the Generic Reliability Trust Model (GRTM). This approach to trust modelling is demonstrated with a new, flexible trust metric called Hop-count Limited Transitive Trust (HLTT), and is also applied to an implementation of the existing Maurer Confidence Valuation (MCV) trust metric. All metrics in the GRTM framework utilize a common probabilistic trust model which is the solution of a general reliability problem. Two generalized algorithms are presented for computing GRTM, based on inclusion-exclusion and factoring. A conservative approximation heuristic is defined which leads to more practical algorithm performance. A Java-based implementation of these algorithms for the HLTT and MCV trust metrics is used to demonstrate the impact of the approximation. An XML-based trust-graph representation and a random power-law trust graph generator are used to simulate large informal trust networks.
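At the core of such a metric is a standard two-terminal network reliability computation. As a rough illustration (not code from the thesis), the Java sketch below evaluates reliability by inclusion-exclusion over a given set of minimal trust paths, assuming independent edge probabilities; the class and variable names are invented for the example.

```java
import java.util.*;

/** Minimal sketch (not from the thesis): two-terminal reliability by
 *  inclusion-exclusion over a set of minimal source-to-sink paths. */
public class PathReliability {

    /**
     * paths: each minimal path is the set of edge ids it uses;
     * edgeProb: edge id -> probability that the edge (trust link) is "up".
     * Returns P(at least one path fully operational), assuming independent edges.
     */
    static double reliability(List<Set<Integer>> paths, Map<Integer, Double> edgeProb) {
        int m = paths.size();
        double total = 0.0;
        // Enumerate every non-empty subset of paths (exponential; fine only for small
        // examples, which is why factoring and approximation heuristics matter in practice).
        for (int mask = 1; mask < (1 << m); mask++) {
            Set<Integer> union = new HashSet<>();
            for (int i = 0; i < m; i++) {
                if ((mask & (1 << i)) != 0) union.addAll(paths.get(i));
            }
            double term = 1.0;
            for (int e : union) term *= edgeProb.get(e);
            // Inclusion-exclusion sign: + for odd-sized subsets, - for even-sized ones.
            total += (Integer.bitCount(mask) % 2 == 1) ? term : -term;
        }
        return total;
    }

    public static void main(String[] args) {
        // Toy trust graph: two source-to-target paths sharing edge 2.
        List<Set<Integer>> paths = List.of(Set.of(1, 2), Set.of(3, 2));
        Map<Integer, Double> edgeProb = Map.of(1, 0.9, 2, 0.8, 3, 0.7);
        System.out.println(reliability(paths, edgeProb)); // 0.72 + 0.56 - 0.504 = 0.776
    }
}
```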
2

A new approach to dynamic internet risk analysis

18 August 2009 (has links)
D.Econ.
3

Assessing program code through static structural similarity

Naude, Kevin Alexander January 2007 (has links)
Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focusses on the application of static analysis, particularly structural similarity, to marking student programs. Existing traditional measures of similarity are limited in that they are usually only effective on tree structures. In this regard they do not easily support dependencies in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning, and cannot be interpreted in isolation, i.e. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation. The problem with the SimRank measure arises from the fact that its scores depend on all possible mappings between the children of vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain-specific information which is not structural in nature. It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of attributes in program graphs are identifier names. The choice of identifiers in programs is arbitrary as they are purely symbolic. A problem facing any comparison between programs is that they are unlikely to use the same set of identifiers. This problem indicates that a mapping between the identifier sets is required. The third contribution of this research is a method for applying the structural similarity measure in a two-step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantics-preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker. It is shown that the most accurate assessment requires that programs be compared not only with a set of good solutions, but with a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs.
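To make the propagation idea concrete, here is a small, non-authoritative Java sketch, not the thesis's actual Weighted Assignment Similarity measure: vertex similarity mixes a local attribute match with a score derived from a one-to-one pairing of child vertices, with a greedy pairing standing in for the locally optimal assignment; the names, initialisation, and weighting are illustrative assumptions.

```java
import java.util.*;

/** Rough sketch (not the thesis's measure): vertex similarity mixes a local
 *  attribute match with a score from a one-to-one pairing of child vertices. */
public class GraphSimilaritySketch {

    // childrenX[i] lists the child indices of vertex i; labelX[i] is a vertex attribute.
    static double[][] propagate(int[][] childrenA, String[] labelA,
                                int[][] childrenB, String[] labelB,
                                int iterations, double structureWeight) {
        int n = labelA.length, m = labelB.length;
        double[][] sim = new double[n][m];
        for (double[] row : sim) Arrays.fill(row, 1.0);   // optimistic start

        for (int it = 0; it < iterations; it++) {
            double[][] next = new double[n][m];
            for (int a = 0; a < n; a++) {
                for (int b = 0; b < m; b++) {
                    double local = labelA[a].equals(labelB[b]) ? 1.0 : 0.0;
                    double structural = assignmentScore(childrenA[a], childrenB[b], sim);
                    next[a][b] = (1 - structureWeight) * local + structureWeight * structural;
                }
            }
            sim = next;
        }
        return sim;
    }

    // Greedy one-to-one pairing of children, normalised by the larger child count,
    // so unmatched children penalise the score.
    static double assignmentScore(int[] kidsA, int[] kidsB, double[][] sim) {
        if (kidsA.length == 0 && kidsB.length == 0) return 1.0;
        if (kidsA.length == 0 || kidsB.length == 0) return 0.0;
        boolean[] used = new boolean[kidsB.length];
        double total = 0.0;
        for (int ca : kidsA) {
            int best = -1;
            for (int j = 0; j < kidsB.length; j++) {
                if (!used[j] && (best < 0 || sim[ca][kidsB[j]] > sim[ca][kidsB[best]])) best = j;
            }
            if (best >= 0) { used[best] = true; total += sim[ca][kidsB[best]]; }
        }
        return total / Math.max(kidsA.length, kidsB.length);
    }

    public static void main(String[] args) {
        // Two tiny program graphs: vertex 0 is the root, with two children each.
        int[][] a = { {1, 2}, {}, {} };
        int[][] b = { {1, 2}, {}, {} };
        double[][] sim = propagate(a, new String[]{"if", "call", "return"},
                                   b, new String[]{"if", "return", "call"}, 5, 0.5);
        System.out.printf("sim(rootA, rootB) = %.3f%n", sim[0][0]);
    }
}
```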
4

Simultaneous connection management and protection in a distributed multilevel security environment

Sears, Joseph D. 09 1900 (has links)
Approved for public release; distribution is unlimited / The Naval Postgraduate School Center for Information Systems Security Studies and Research (CISR) is designing and developing a distributed multilevel secure (MLS) network known as the Monterey Security Architecture (MYSEA). MYSEA will permit the delivery of unmodified commercial off-the-shelf productivity software applications and data from a large number of single-level network domains (e.g., NIPRNET, SIPRNET, JWICS) to a trusted distributed operating environment that enforces MLS policies. Fulfilling the goal of MYSEA requires the analysis and development of a communications framework that supports connections between multiple MLS servers and a set of high-assurance network appliances providing simultaneous access to multiple single-level networks, together with their concurrent connection management. Enabling this functionality requires modifications to the existing MYSEA server, the development of a new high-assurance communications security device, the Trusted Channel Module (TCM), and the implementation of a trusted channel between the MYSEA server and the TCM. This document specifies a framework for incorporating the high-level design of the TCM, several trusted daemons and databases, and a trusted channel protocol into MYSEA to enable a distributed MLS environment. / Lieutenant, United States Navy
5

Internet security threats and solutions

14 July 2015 (has links)
M.Com. (Computer Auditing) / Please refer to full text to view abstract
6

Analysis of security solutions in large enterprises

Bailey, Carmen F. 06 1900 (has links)
Approved for public release; distribution is unlimited / The United States Government and private industry are facing challenges in attempting to secure their computer network infrastructure. The purpose of this research was to capture current lessons learned from Government and industry with respect to solving particular problems associated with the secure management of large networks. Nine thesis questions were generated to examine common security problems faced by enterprises with large networks. Research was predominantly gathered through personal interviews with computer security professionals from both the public and private sectors. The data was then analyzed to compile a set of lessons learned by both sectors regarding several leading computer security issues. These problems included challenges such as maintaining and improving security during operating system upgrades, analyzing lessons learned in configuration management, and educating employees with regard to following policy, among other challenging issues. The results of this thesis were lessons learned in the areas of employee education, Government involvement in computer security, and other key security areas. An additional result was the development of case studies based upon the lessons learned. / Naval Postgraduate School author (civilian).
7

Efficient schemes for anonymous credential with reputation support

Yu, Kin-ying, 余見英. January 2012 (has links)
Anonymous credentials are an important tool to protect the identity of users on the Internet for various reasons (e.g. free open speech), even when a service provider (SP) requires user authentication. Yet, misbehaving users may use anonymity for malicious purposes, and the SP would have no way to prevent these users from causing further damage. Revocable anonymous credentials allow the SP to revoke a particular anonymous user based on the observed behavior of a session the user conducted. However, this kind of all-or-nothing revocation does not work well with “Web 2.0” applications because it neither gives a user a second chance to remedy a misconduct nor rewards positive behavior. Reputation support is vital for these platforms. In this thesis, we propose two schemes with different strengths that solve this privacy and reputation dilemma. Our first scheme, PE(AR)2, aims to empower anonymous credential based authentication with revocation and rewarding support. The scheme is efficient and outperforms PEREA, previously the most efficient solution to this problem, with an authentication time complexity of O(1), as compared with other related works that depend on either the user-side storage or the blacklist size. PEREA has a few drawbacks that make it vulnerable and not practical enough. Our scheme fixes PEREA's vulnerability together with an efficiency improvement. Our benchmark on PE(AR)2 shows that an SP can handle over 160 requests/second when the credentials store 1000 single-use tickets, which outperforms PEREA with a 460-fold efficiency improvement. Our second scheme, SAC, aims to provide revocation and full reputation support over an anonymous credential based authentication system. With a small efficiency trade-off as compared with PE(AR)2, the scheme supports both positive and negative scores. The scoring mechanism is much more flexible: the SP can modify the rated score of any active session, or declare that no more ratings should be given to it and mark it as finalized. SAC provides much more elastic user-side credential storage; there is no practical limit on the number of authentication sessions associated with a credential. Unlike other schemes, SAC makes use of a combined membership proof instead of multiple non-membership proofs to distinguish whether a session is active, finalized, or blacklisted. This special consideration has contributed to reducing the efficiency-flexibility trade-off relative to PE(AR)2, keeping the scheme practical in terms of authentication time. Our benchmark on SAC shows that an SP can handle over 2.9 requests/second when the credentials store 10000 active sessions, which outperforms BLACR-Express (a related work based on pairing cryptography with full reputation support) with a 131-fold efficiency improvement. We then analyze the potential difficulties of adopting the solutions in existing web applications. We present a plugin-based approach such that our solutions can run in a user's web browser directly, and show how a service provider can instruct the plugin to communicate using our protocol in an HTML context. We conclude the thesis by stating that the solutions are practical, efficient and easy to integrate in real-world scenarios, and discuss potential future work. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
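Setting the cryptography aside, the reputation policy that a SAC-style scheme enforces can be sketched in plain, non-private code: authentication succeeds only if no session is blacklisted and the accumulated score clears a threshold. The real scheme proves this condition in zero knowledge; the Java below only illustrates the policy logic, with invented names.

```java
import java.util.List;

/** Plain (non-private) sketch of a reputation policy: authenticate only if no
 *  session is blacklisted and the net score clears a threshold. A real scheme
 *  proves this in zero knowledge; names here are invented. */
public class ReputationPolicySketch {

    record Session(int score, boolean blacklisted) {}

    static boolean mayAuthenticate(List<Session> sessions, int threshold) {
        int total = 0;
        for (Session s : sessions) {
            if (s.blacklisted()) return false;   // outright revocation still possible
            total += s.score();                  // positive and negative ratings both count
        }
        return total >= threshold;
    }

    public static void main(String[] args) {
        List<Session> sessions = List.of(new Session(5, false), new Session(-2, false));
        System.out.println(mayAuthenticate(sessions, 0));   // true: net score 3 >= 0
    }
}
```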
8

Dispersability and vulnerability analysis of certificate systems

Jung, Eunjin 28 August 2008 (has links)
Not available / text
9

Security of communication in computer networks (key management, verification)

Lu, Wen-Pai. January 1986 (has links)
This dissertation investigates two of the most important problems in establishing communication security in computer networks: (1) developing a model which precisely describes the mechanism that enforces the security policy and requirements for a secure network, and (2) designing a key management scheme for establishing a secure session for end-to-end encryption between a pair of communicants. The security mechanism attempts to ensure the secure flow of information between entities assigned to different security classes in different computer systems attached to a computer communication network. The mechanism also controls access to the network devices by the subjects (users and processes executed on behalf of the users). The communication security problem is formulated using a mathematical model which precisely describes the security requirements for the network. The model integrates the notions of access control and information flow control to provide a Trusted Network Base (TNB) for the network. The security of a network whose mechanism is designed following the present model is demonstrated using mathematical induction techniques. The problem of designing key management schemes for establishing end-to-end encrypted sessions between source-destination pairs is then examined for the case where the source and the destination are on different networks interconnected via gateways and intermediate networks. In such an internet environment, the key management problem attains a high degree of complexity due to the differences in the key distribution mechanisms used in the constituent networks and the infeasibility of making extensive hardware and software changes to the existing networks. A hierarchical approach to key management is presented which utilizes the existing network-specific protocols at the lower levels and protocols between Authentication Servers and/or Control Centers of different networks at the higher levels. Details of this approach are discussed for specific illustrative scenarios to demonstrate its implementational simplicity. A formal verification of the security of the resulting system is also conducted by an axiomatic procedure utilizing certain combinatory logic principles. This approach is general and can be used for verifying the security of any existing key management scheme.
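The hierarchical idea, an end-to-end session key that depends on material supplied at more than one level of the hierarchy, can be illustrated with a toy Java sketch. This is not the dissertation's protocol: real schemes use authenticated key agreement between the Authentication Servers or Control Centers, whereas the sketch simply hashes two key contributions together, and all names are assumptions.

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.MessageDigest;

/** Toy illustration (not the dissertation's protocol) of a two-level key hierarchy:
 *  the end-to-end session key is derived from key material supplied by the
 *  authentication servers of both constituent networks. */
public class HierarchicalKeySketch {

    static byte[] sessionKey(SecretKey fromSourceAS, SecretKey fromDestAS) throws Exception {
        // The derived key depends on material from both levels of the hierarchy.
        // A real scheme would use authenticated key agreement, not a bare hash.
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(fromSourceAS.getEncoded());
        sha.update(fromDestAS.getEncoded());
        return sha.digest();
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey a = kg.generateKey();   // contribution via the source network's AS
        SecretKey b = kg.generateKey();   // contribution via the destination network's AS
        System.out.printf("derived session key: %d bytes%n", sessionKey(a, b).length);
    }
}
```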
10

A holistic approach to network security in OGSA-based grid systems

Loutsios, Demetrios January 2006 (has links)
Grid computing technologies facilitate complex scientific collaborations between globally dispersed parties, which make use of heterogeneous technologies and computing systems. However, in recent years the commercial sector has developed a growing interest in Grid technologies. Prominent Grid researchers have predicted that Grids will grow into the commercial mainstream, even though their origins were in scientific research, much the same way as the Internet started as a vehicle for research collaboration between universities and government institutions and grew into a technology with large commercial applications. Grids facilitate complex trust relationships between globally dispersed business partners, research groups, and non-profit organizations. Almost any dispersed “virtual organization” willing to share computing resources can make use of Grid technologies. Grid computing facilitates the networking of shared services; the inter-connection of a potentially unlimited number of computing resources within a “Grid” is possible. Grid technologies leverage a range of open standards and technologies to provide interoperability between heterogeneous computing systems. Newer Grids build on key capabilities of Web Service technologies to provide easy and dynamic publishing and discovery of Grid resources. Due to the inter-organisational nature of Grid systems, there is a need to provide adequate security to Grid users and to Grid resources. This research proposes a framework, using a specific brokered pattern, which addresses several common Grid security challenges, including: providing secure and consistent cross-site authentication and authorization; single sign-on capabilities for Grid users; underlying platform and runtime security; and Grid network communications and messaging security. These Grid security challenges can be viewed as comprising two (proposed) logical layers of a Grid: a Common Grid Layer (higher-level Grid interactions) and a Local Resource Layer (lower-level technology security concerns). This research is concerned with providing a generic and holistic security framework to secure both layers. This research makes extensive use of STRIDE, an acronym for Microsoft's approach to classifying security threats, as part of a holistic Grid security framework. STRIDE and key Grid-related standards, such as the Open Grid Services Architecture (OGSA), the Web Services Resource Framework (WSRF), and the Globus Toolkit, are used to formulate the proposed framework.
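As a small illustration of how a STRIDE-driven review might be organised per proposed layer, the Java sketch below enumerates the six STRIDE categories and walks a per-layer checklist; the layer-to-threat mapping is an assumption for the example, not taken from the thesis.

```java
import java.util.EnumSet;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: organising a STRIDE review per proposed Grid layer.
 *  The layer-to-threat mapping is illustrative, not taken from the thesis. */
public class StrideChecklist {

    enum Stride { SPOOFING, TAMPERING, REPUDIATION, INFORMATION_DISCLOSURE,
                  DENIAL_OF_SERVICE, ELEVATION_OF_PRIVILEGE }

    public static void main(String[] args) {
        Map<String, EnumSet<Stride>> checklist = new LinkedHashMap<>();
        checklist.put("Common Grid Layer",
                EnumSet.of(Stride.SPOOFING, Stride.REPUDIATION, Stride.ELEVATION_OF_PRIVILEGE));
        checklist.put("Local Resource Layer",
                EnumSet.of(Stride.TAMPERING, Stride.INFORMATION_DISCLOSURE, Stride.DENIAL_OF_SERVICE));

        checklist.forEach((layer, threats) ->
                System.out.println(layer + " -> review mitigations for " + threats));
    }
}
```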
