  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

AMMP-EXTN: A User Privacy and Collaboration Control Framework for a Multi-User Collaboratory Virtual Reality System

Ma, Wenjun 01 October 2007 (has links)
In this thesis, we propose a new design of privacy and session control for improving AMMP-VIS [1], a collaborative molecular modeling CVR system. The design mainly addresses the issue of coordinating competing user interests and privacy protection. Based on our investigation of AMMP-VIS, we propose a four-level access control structure for collaborative sessions and dynamic action priority specification for manipulations on shared molecular models. Our design allows a single user to participate in multiple simultaneous sessions. Moreover, a messaging system with text chatting and system broadcasting functionality is included. A 2D user interface [2] for easy command invocation is developed in Python. Two other key aspects of system implementation, the collaboration Central deployment and the 2D GUI for control, are also discussed. Finally, we describe our system evaluation plan, which is based on an improved cognitive walkthrough and heuristic evaluation as well as statistical usage data.
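The four-level access hierarchy and dynamic action priorities described above can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; the level names and the priority rule are assumptions.

```python
# Hypothetical sketch: a four-level session access hierarchy plus
# priority-based conflict resolution for manipulations on a shared model.
# Level names and the "highest priority wins" rule are illustrative.
ACCESS_LEVELS = ["observer", "participant", "moderator", "owner"]

def can_manipulate(level):
    # only participants and above may manipulate the shared molecular model
    return ACCESS_LEVELS.index(level) >= ACCESS_LEVELS.index("participant")

def resolve_conflict(actions):
    # actions: (user, priority) pairs competing for the same model;
    # the dynamically assigned highest priority wins
    return max(actions, key=lambda a: a[1])[0]
```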
282

Unsupervised anomaly detection framework for multiple-connection based network intrusions

Lu, Wei 04 December 2009 (has links)
In this dissertation, we propose an effective and efficient online unsupervised anomaly detection framework. The framework consists of new anomalousness metrics, named IP Weight, and a new hybrid clustering algorithm, named I-means. IP Weight metrics provide measures of the anomalousness of IP packet flows on networks. A simple classification of network intrusions distinguishes between single-connection based attacks and multiple-connection based attacks. The IP Weight metrics proposed in this work specifically characterize multiple-connection based attacks; the definition of metrics specific to single-connection based attacks is left for future work. The I-means algorithm combines mixture resolving, a genetic algorithm that automatically estimates the optimal number of clusters for a set of data, and the k-means algorithm for clustering. Three sets of experiments are conducted to evaluate our new unsupervised anomaly detection framework. The first experiment empirically validates that IP Weight metrics reduce the dimensionality of the feature space characterizing IP packets to a level comparable with the principal component analysis technique. The second experiment is an offline evaluation based on the 1998 DARPA intrusion detection dataset. In the offline evaluation, we compare our framework with three other unsupervised anomaly detection approaches, namely plain k-means clustering, univariate outlier detection, and multivariate outlier detection. Evaluation results show that the detection framework based on I-means yields the highest detection rate with a low false alarm rate. Specifically, it detects 18 of the 19 multiple-connection based attack types. The third experiment is an online evaluation in a live networking environment. The evaluation result not only confirms the detection effectiveness observed with the DARPA dataset, but also shows good runtime efficiency, with response times within a few seconds.
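The clustering stage can be illustrated with a plain k-means sketch. The genetic-algorithm estimation of the number of clusters that distinguishes I-means is not reproduced here; all names are hypothetical and this is not the thesis's implementation.

```python
# Illustrative k-means on 2D points, standing in for the clustering step
# of I-means (which additionally estimates k with a genetic algorithm).
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # naive initialization
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```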
283

Managing privacy in peer-to-peer distribution of clinical documents

Obry, Christina 10 March 2010 (has links)
Security and privacy are two of the most important aspects of any medical information mediation system. Governments have established privacy legislation to prevent abuse of patients' personal data. This legislation requires organizations to obtain consent prior to using and exchanging information. Consents are defined as policies. However, policies are often not precise or adequate enough to address all possible eventualities and exceptions. Unanticipated emergency cases may cause conflicts between a patient's right to privacy and the need to receive treatment from well-informed care-givers. In these situations, the patient's safety should take precedence. Therefore, care-givers should have the ability to override the patient's privacy policies on behalf of the patient. This thesis presents a mechanism which restricts access to sensitive medical data based on defined policies, but which also allows overriding the policies in emergency cases. The overriding process is monitored and audited in order to prevent misuse.
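The "override with audit" idea described above is often called break-glass access control. A minimal sketch, with entirely hypothetical names and a deliberately simplified policy model:

```python
# Hypothetical break-glass sketch: policy denies access by default, but a
# care-giver may override in an emergency, and every override is audited.
audit_log = []

def check_access(policy, requester, purpose, emergency=False):
    allowed = requester in policy["permitted"]
    if not allowed and emergency:
        # override on behalf of the patient; record it for later review
        audit_log.append({"who": requester, "purpose": purpose, "override": True})
        return True
    return allowed
```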
284

A Statistically Rigorous Evaluation of the Cascade Bloom Filter for Distributed Access Enforcement in Role-Based Access Control (RBAC) Systems

Zitouni, Toufik January 2010 (has links)
We consider the distributed access enforcement problem for Role-Based Access Control (RBAC) systems. Such enforcement has become important with RBAC’s increasing adoption, and the proliferation of data that needs to be protected. Our particular interest is in the evaluation of a new data structure that has recently been proposed for enforcement: the Cascade Bloom Filter. The Cascade Bloom Filter is an extension of the Bloom filter, and provides for time- and space-efficient encodings of sets. We compare the Cascade Bloom Filter to the Bloom Filter, and another approach called Authorization Recycling that has been proposed for distributed access enforcement in RBAC. One of the challenges we address is the lack of a benchmark: we propose and justify a benchmark for the assessment. Also, we adopt a statistically rigorous approach for empirical assessment from recent work. We present our results for time- and space-efficiency based on our benchmark. We demonstrate that, of the three data structures that we consider, the Cascade Bloom Filter scales the best with the number of RBAC sessions from the standpoints of time- and space-efficiency.
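The Cascade Bloom Filter extends the plain Bloom filter, which encodes a set compactly at the cost of possible false positives (a cascade of filters is then used to eliminate false positives for a known universe). A minimal sketch of the base structure only; parameters and the double-hashing-by-salt scheme are illustrative, not the thesis's construction:

```python
# Illustrative Bloom filter: k salted SHA-256 hashes over an m-position
# bit array (stored one byte per bit for clarity, not space efficiency).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        # derive k positions by hashing the item with k different salts
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # no false negatives; false positives possible
        return all(self.bits[pos] for pos in self._positions(item))
```

A lookup such as `"alice:role_admin" in bf` answers set membership in O(k) time regardless of how many entries were inserted, which is what makes the structure attractive for per-session access enforcement.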
285

Energy Efficient Protocols for Delay Tolerant Networks

Choi, Bong Jun January 2011 (has links)
Delay tolerant networks (DTNs) are characterized by frequent disconnections and long link delays among devices due to mobility, sparse deployment of devices, attacks, and noise. Considerable research effort has recently been devoted to DTNs, enabling communications between network entities with intermittent connectivity. Unfortunately, mobile devices have limited energy capacity, and the fundamental problem is that traditional power-saving mechanisms are designed assuming well-connected networks. Because inter-contact durations are much larger than contact durations, devices spend most of their lifetime in neighbor discovery, and centralized power-saving strategies are difficult. Consequently, mobile devices consume a significant amount of energy in neighbor discovery, rather than in infrequent data transfers. Therefore, distributed energy efficient neighbor discovery protocols for DTNs are essential to minimize the degradation of network connectivity and maximize the benefits from mobility. In this thesis, we develop sleep scheduling protocols in the medium access control (MAC) layer that are adaptive and distributed under different clock synchronization conditions: synchronous, asynchronous, and semi-asynchronous. In addition, we propose a distributed clock synchronization protocol to mitigate the clock synchronization problem in DTNs. Our research accomplishments are briefly outlined as follows: Firstly, we design an adaptive exponential beacon (AEB) protocol. By exploiting the trend of contact availability, beacon periods are independently adjusted by each device and optimized using the distribution of contact durations. The AEB protocol significantly reduces energy consumption while maintaining comparable packet delivery delay and delivery ratio. Secondly, we design two asynchronous clock based sleep scheduling (ACDS) protocols.
Based on the fact that global clock synchronization is difficult to achieve in general, predetermined patterns of sleep schedules are constructed using hierarchical arrangements of cyclic difference sets, such that devices independently selecting different duty cycle lengths are still guaranteed to have overlapping awake intervals with other devices within communication range. Thirdly, we design a distributed semi-asynchronous sleep scheduling (DSA) protocol. Although synchronization error is unavoidable, some level of clock accuracy is attainable in many practical scenarios. The sleep schedules are constructed to guarantee contacts among devices having loosely synchronized clocks, and parameters are optimized using the distribution of synchronization error. We also define conditions under which the proposed semi-asynchronous protocol outperforms existing asynchronous sleep scheduling protocols. Lastly, we design a distributed clock synchronization (DCS) protocol. The proposed protocol considers asynchronous and long-delayed connections when exchanging relative clock information among nodes. As a result, the smaller synchronization error achieved by the proposed protocol allows more accurate timing information and renders neighbor discovery more energy efficient. The designed protocols improve the lifetime of mobile devices in DTNs by means of energy efficient neighbor discoveries that reduce the energy waste caused by idle listening.
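The overlap guarantee that cyclic difference sets provide can be shown with the classic (7, 3, 1) difference set {1, 2, 4} mod 7: a device awake in those slots shares at least one awake slot with any device using a cyclic shift of the same schedule, whatever the clock offset. This sketch illustrates only that base property, not the hierarchical arrangement the thesis builds on it:

```python
# Illustrative: with a perfect cyclic difference set, every pair of
# cyclically shifted schedules is guaranteed a common awake slot.
DIFFERENCE_SET = {1, 2, 4}  # (7, 3, 1) perfect difference set modulo 7
CYCLE = 7

def awake_slots(offset):
    # a device's awake slots are the difference set shifted by its clock offset
    return {(s + offset) % CYCLE for s in DIFFERENCE_SET}

def have_common_slot(offset_a, offset_b):
    return bool(awake_slots(offset_a) & awake_slots(offset_b))
```

The guarantee follows because every nonzero residue mod 7 occurs exactly once as a difference of two elements of {1, 2, 4}, so any relative shift leaves at least one slot in common while each device stays awake only 3 slots out of 7.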
286

Throughput Optimization in Multi-hop Wireless Networks with Random Access

Uddin, Md. Forkan January 2011 (has links)
This research investigates cross-layer design in multi-hop wireless networks with random access. Due to the complexity of the problem, we study cross-layer design with a simple slotted ALOHA medium access control (MAC) protocol, without considering any network dynamics. Firstly, we study the optimal joint configuration of routing and MAC parameters in slotted ALOHA based wireless networks under a signal to interference plus noise ratio based physical interference model. We formulate a joint routing and MAC (JRM) optimization problem under a saturation assumption to determine the optimal max-min throughput of the flows and the optimal configuration of routing and MAC parameters. The JRM optimization problem is a complex non-convex problem. We solve it by an iterated optimal search (IOS) technique and validate our model via simulation. Via numerical and simulation results, we show that JRM design provides a significant throughput gain over a default configuration in a slotted ALOHA based wireless network. Next, we study the optimal joint configuration of routing, MAC, and network coding in wireless mesh networks using XOR-like network coding without opportunistic listening. We reformulate the JRM optimization problem to include this simple network coding and obtain a more complex non-convex problem. As with the JRM problem, we solve it by the IOS technique and validate our model via simulation. Numerical and simulation results for different networks illustrate that (i) the jointly optimized configuration provides a remarkable throughput gain with respect to a default configuration in a slotted ALOHA system with network coding, and (ii) the throughput gain obtained by the simple network coding is significant, especially at low transmission power, i.e., the gain obtained by jointly optimizing routing, MAC, and network coding is significant even when compared to an optimized network without network coding.
We then show that, in a mesh network, a significant fraction of the throughput gain from network coding can be obtained by limiting network coding to nodes directly adjacent to the gateway. Next, we propose simple heuristics to configure slotted ALOHA based wireless networks without and with network coding. These heuristics are extensively evaluated via simulation and found to be very efficient. We also formulate problems to jointly configure not only the routing and MAC parameters but also the transmission rate parameters in multi-rate slotted ALOHA systems without and with network coding. We compare the performance of multi-rate and single rate systems via numerical results. We model the energy consumption in terms of slotted ALOHA system parameters. We find that the energy consumption of the various cross-layer systems, i.e., single rate and multi-rate slotted ALOHA systems without and with network coding, is very similar.
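The intuition behind optimizing MAC transmission probabilities can be seen in the textbook single-hop slotted ALOHA success probability S(n, p) = n·p·(1−p)^(n−1): a slot carries a packet only when exactly one of n saturated nodes transmits. This simplified collision model is not the thesis's SINR-based formulation, but it shows why the transmit probability is a tuning knob worth optimizing:

```python
# Illustrative: slotted ALOHA slot-success probability and a brute-force
# search for the optimal transmit probability (analytically p* = 1/n).
def aloha_throughput(n, p):
    # probability that exactly one of n nodes transmits in a given slot
    return n * p * (1 - p) ** (n - 1)

def best_p(n, grid=10000):
    # grid search over p in (0, 1); stands in for the paper's IOS technique
    return max((i / grid for i in range(1, grid)),
               key=lambda p: aloha_throughput(n, p))
```

For n = 10 nodes the search recovers p ≈ 0.1, matching the analytic optimum 1/n; transmitting too aggressively (e.g. p = 0.5) collapses throughput through collisions.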
287

Secure Schemes for Semi-Trusted Environment

Tassanaviboon, Anuchart January 2011 (has links)
In recent years, two distributed system technologies have emerged: Peer-to-Peer (P2P) and cloud computing. In the former, computers at the edge of networks share their resources, i.e., computing power, data, and network bandwidth, and obtain resources from other peers in the same community. Although this technology enables efficiency, scalability, and availability at low cost of ownership and maintenance, peers defined as "like each other" are not wholly controlled by one another or by the same authority. In addition, resources and functionality in P2P systems depend on peer contribution, i.e., storing, computing, routing, etc. These specific aspects raise security concerns and attacks that many researchers try to address. Most solutions proposed by researchers rely on public-key certificates from an external Certificate Authority (CA) or a centralized Public Key Infrastructure (PKI). However, both a CA and a PKI contradict fully decentralized P2P systems that are self-organizing and infrastructureless. To avoid this contradiction, this thesis concerns the provisioning of public-key certificates in P2P communities, which is a crucial foundation for securing P2P functionalities and applications. We create a framework, named the Self-Organizing and Self-Healing CA group (SOHCG), that can provide certificates without a centralized Trusted Third Party (TTP). In our framework, a CA group is initialized in a Content Addressable Network (CAN) by trusted bootstrap nodes and then grows to a mature state by itself. Based on our group management policies and predefined parameters, membership in a CA group is dynamic and has a uniform distribution over the P2P community; the size of a CA group is kept at a level that balances performance and acceptable security. A multicast group over the underlying CA group is constructed to reduce the communication and computation overhead of collaboration among CA members.
To maintain the quality of the CA group, an honest majority of members is maintained by a Byzantine agreement algorithm, and all shares are refreshed gradually and continuously. Our CA framework has been designed to meet all design goals: it is self-organizing, self-healing, scalable, resilient, and efficient. A security analysis shows that the framework enables key registration and certificate issuance with resistance to external attacks, i.e., node impersonation, man-in-the-middle (MITM), Sybil, and a specific form of DoS, as well as internal attacks, i.e., CA functionality interference and CA group subversion. Cloud computing is the most recent evolution of distributed systems that, like P2P systems, enables shared resources. Unlike P2P systems, cloud entities have asymmetric roles, as in client-server models, i.e., end-users collaborate with Cloud Service Providers (CSPs) through Web interfaces or Web portals. Cloud computing is a combination of technologies, e.g., SOA services, virtualization, grid computing, clustering, P2P overlay networks, management automation, and the Internet. With these technologies, cloud computing can deliver services with specific properties: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. However, these core technologies have their own intrinsic vulnerabilities, which induce attacks specific to cloud computing. Furthermore, since public clouds are a form of outsourcing, the security of users' resources must rely on CSPs' administration. This situation raises two crucial security concerns for users: locking data into a single CSP and losing control of resources. Providing inter-operation between Application Service Providers (ASPs) and untrusted cloud storage is a countermeasure that can protect users from vendor lock-in and from losing control of their data.
To meet the above challenge, this thesis proposes a new authorization scheme, named OAuth and ABE based authorization (AAuth), that is built on the OAuth standard and leverages Ciphertext-Policy Attribute Based Encryption (CP-ABE) and ElGamal-like masks to construct ABE-based tokens. The ABE tokens can facilitate a user-centric approach, end-to-end encryption, and end-to-end authorization in semi-trusted clouds. With these facilities, owners can take control of their data resting in semi-trusted clouds and safely use services from unknown ASPs. To this end, our scheme divides the attribute universe into two disjoint sets: confined attributes, defined by owners to limit the lifetime and scope of tokens, and descriptive attributes, defined by authorities to certify the characteristics of ASPs. Security analysis shows that AAuth maintains the same security level as the original CP-ABE scheme and, as OAuth does, protects users from exposing their credentials to ASPs. Moreover, AAuth can resist both external and internal attacks, including those from untrusted cloud storage. Since most cryptographic functions are delegated from owners to CSPs, AAuth gains computing power from clouds. In our extensive simulation, AAuth's greater overhead compared to OAuth was balanced by its greater security. Furthermore, our scheme works seamlessly with storage providers by retaining the providers' APIs in the usual way.
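The confined/descriptive attribute split can be illustrated at the policy-logic level only. The sketch below deliberately omits all cryptography (CP-ABE, ElGamal-like masks); it just shows how confined attributes bound a token's lifetime while descriptive attributes must be held by the presenting ASP. All field names are hypothetical:

```python
# Hypothetical token check: confined attributes limit the token itself
# (here, an expiry); descriptive attributes certify the ASP presenting it.
# This is attribute logic only, not the CP-ABE construction.
import time

def token_valid(token, asp_attrs, now=None):
    now = time.time() if now is None else now
    if now > token["expires"]:          # confined attribute: lifetime
        return False
    # descriptive attributes: the ASP must hold every required attribute
    return token["required_attrs"] <= asp_attrs
```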
288

Towards securing networks of resource constrained devices: a study of cryptographic primitives and key distribution schemes

Chan, Kevin Sean 25 August 2008 (has links)
Wireless networks afford many benefits compared to wired networks in terms of their usability in dynamic situations, the mobility of networked devices, and the accessibility of hazardous environments. The devices used in these networks are generally assumed to be limited in resources such as energy, memory, communications range, and computational ability. Because they operate in remote or hostile environments, they are in danger of being compromised by malicious entities. This work addresses these issues to increase the security of these networks while still maintaining acceptable levels of networking performance and resource usage. We investigate new methods for data encryption on personal wireless hand-held devices. An important consideration for resource-constrained devices is the processing required to encrypt data for transmission or for secure storage. Significant latency from data encryption diminishes the viability of these security services for hand-held devices. Also, increased processing demands require additional energy for each device, where both energy and processing capability are limited. Therefore, one area of interest for hand-held wireless devices is providing data encryption while minimizing the processing and energy overhead incurred to provide such a security service. We study the security of a wavelet-based cryptosystem and consider its viability for use in hand-held devices. This thesis also considers the performance of wireless sensor networks in the presence of an adversary. The sensor nodes used in these networks are limited in available energy, processing capability, and transmission range. Despite these resource constraints and expected malicious attacks on the network, these networks require widespread, highly reliable communications. Maintaining satisfactory levels of network performance and security between entities is an important goal toward ensuring the successful and accurate completion of desired sensing tasks.
However, the resource-constrained nature of the sensor nodes used in these applications provides challenges in meeting these networking and security requirements. We consider link-compromise attacks and node-spoofing attacks on wireless sensor networks, and we consider the performance of various key predistribution schemes applied to these networks. New key predistribution techniques to improve the security of wireless sensor networks are proposed.
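A classic key predistribution scheme of the kind evaluated in such studies is random predistribution from a shared key pool (Eschenauer–Gligor style): each node is preloaded with a random "key ring," and two neighbors can secure a link only if their rings intersect. This sketch is illustrative and not necessarily one of the schemes the thesis compares:

```python
# Illustrative random key predistribution: draw each node's key ring from a
# shared pool, then check link feasibility and the analytic share probability.
import math
import random

def predistribute(pool_size, ring_size, n_nodes, seed=0):
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(n_nodes)]

def can_link(ring_a, ring_b):
    # two neighbors can establish a secure link iff their rings intersect
    return bool(ring_a & ring_b)

def share_prob(pool_size, ring_size):
    # probability that two independently drawn rings share at least one key:
    # 1 - C(P-k, k) / C(P, k)
    return 1 - math.comb(pool_size - ring_size, ring_size) / math.comb(pool_size, ring_size)
```

The tension the scheme exposes is exactly the one the abstract describes: larger rings raise connectivity (`share_prob`) but also raise the damage from a captured node, since each compromised key may appear on many links.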
289

Engineering Trusted Location Services and Context-aware Augmentations for Network Authorization Models

Wullems, Christian John January 2005 (has links)
Context-aware computing has been a rapidly growing research area; however, its uses have been predominantly targeted at pervasive applications for smart spaces such as smart homes and workplaces. This research has investigated the use of location and other context data in access control policy, with the purpose of augmenting existing IP and application-layer security to provide fine-grained access control and effective enforcement of security policy. The use of location and other context data for security purposes requires that the technologies and methods used for acquiring the context data be trusted. This thesis begins with the description of a framework for the analysis of location systems for use in security services and critical infrastructure. The analysis classifies cooperative location systems by their modes of operation and the common primitives they are composed of. Common location systems are analyzed for inherent security flaws and limitations, based on a vulnerability assessment of location system primitives and a taxonomy of known attacks. An efficient scheme for supporting trusted differential GPS corrections is proposed, such that the DGPS vulnerabilities that have been identified are mitigated. The proposal augments the existing broadcast messaging protocol with a number of new messages facilitating origin authentication and integrity of broadcast corrections for marine vessels. A proposal for a trusted location system based on GSM is presented, in which a model for tamper-resistant location determination using GSM signaling is designed. A protocol for associating a user with a cell phone is proposed and demonstrated in a framework for both Web and Wireless Application Protocol (WAP) applications. After introducing the security issues of existing location systems and a trusted location system proposal, the focus of the thesis shifts to the use of location data in authorization and access control processes.
This is considered at both the IP-layer and the application-layer. For IP-layer security, a proposal for location proximity-based network packet filtering in IEEE 802.11 Wireless LANs is presented. This proposal details an architecture that extends the Linux netfilter system to support proximity-based packet filtering, using methods of transparent location determination through the application of a pathloss model to raw signal measurements. Our investigation of application-layer security resulted in the establishment of a set of requirements for the use of contextual information in application level authorization. Existing network authentication protocols and access control mechanisms are analyzed for their ability to fulfill these requirements and their suitability in facilitating context-aware authorization. The result is the design and development of a new context-aware authorization architecture, using the proposed modifications to Role-based Access Control (RBAC). One of the distinguishing characteristics of the proposed architecture is its ability to handle authorization with context-transparency, and provide support for real-time granting and revocation of permissions. During the investigation of the context-aware authorization architecture, other security contexts in addition to host location were found to be useful in application level authorization. These included network topology between the host and application server, the security of the host and the host execution environment. Details of the prototype implementation, performance results, and context acquisition services are presented.
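The pathloss-model distance estimation used for proximity-based filtering can be sketched with the standard log-distance model, PL(d) = PL(d0) + 10·n·log10(d/d0): invert the received signal strength to get a distance estimate. The calibration constants below (reference power, pathloss exponent) are assumptions, not values from the thesis:

```python
# Illustrative log-distance pathloss inversion: estimate transmitter
# distance from a raw RSSI measurement. Calibration values are assumed.
import math

def estimate_distance(rssi_dbm, ref_power_dbm=-40.0, pathloss_exp=2.7, d0=1.0):
    # ref_power_dbm: received power at reference distance d0 (calibration)
    # pathloss_exp: environment-dependent exponent (~2 free space, 2.7-4 indoors)
    return d0 * 10 ** ((ref_power_dbm - rssi_dbm) / (10 * pathloss_exp))
```

A packet filter can then compare the estimated distance against a proximity threshold (e.g. "inside the building") before accepting traffic, which is the transparent location determination idea described above.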
290

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method, a number of models are gradually created to cover a variety of seen patterns while in use. Each model covers a specific region of the problem space, so any novel or ad-hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition a domain, and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data; this critical situation occurs whenever a new partition is added. We have developed a two-knowledge-base approach. One knowledge base partitions the domain. Within each partition, statistics are accumulated on a number of different parameters. The resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model.
We also used the approach to extend previous work on prudent expert systems, i.e., expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
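The two-knowledge-base pipeline described above can be sketched as follows. The per-partition statistic here is a simple z-score guard with a small-sample cutoff standing in for the robust statistics the thesis uses, and the second knowledge base is reduced to a vote threshold; all names and thresholds are illustrative:

```python
# Illustrative two-stage check: per-partition statistics flag anomalous
# parameters, and a second stage decides whether enough flags justify an alarm.
def is_anomalous(history, x, z_thresh=3.0, min_samples=5):
    # robust to small samples by refusing to judge until enough data is seen
    # (the situation that arises whenever a new partition is added)
    if len(history) < min_samples:
        return False
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    std = var ** 0.5
    return abs(x - mean) > z_thresh * (std if std > 0 else 1.0)

def raise_alarm(parameter_flags, min_anomalous=2):
    # second knowledge base, reduced to a threshold vote for illustration
    return sum(parameter_flags) >= min_anomalous
```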
