231

Performance and control of CSMA wireless networks. / CUHK electronic theses & dissertations collection

January 2010
This thesis investigates the performance and control of CSMA wireless networks. To this end, an analytical model of CSMA wireless networks that captures the essence of their operation is important. We propose an Ideal CSMA Network (ICN) model to characterize the dynamics of the interactions and dependencies of links in CSMA wireless networks. This model allows us to address various issues related to the performance and control of CSMA networks. / We show that the throughput distributions of links in ICN can be computed from a continuous-time Markov chain and are insensitive to the distributions of the transmission time (packet duration) and the backoff countdown time in the CSMA MAC protocol given the ratio of their means ρ, referred to as the access intensity. An outcome of the ICN model is a Back-of-the-Envelope (BoE) approximate computation method that allows us to bypass complicated stochastic analysis and compute link throughputs in many network configurations quickly. The BoE computation method emerges from ICN in the limit ρ → ∞. Our results indicate that BoE is a good approximation technique for modest-size networks such as those typically seen in 802.11 deployments. Beyond serving as the foundation for BoE, the theoretical framework of ICN is also a foundation for understanding and optimizing large CSMA networks. / Motivated by the fact that the contention graph associated with ICN is a Markov random field (MRF) with respect to the probability distribution of its system states, and that belief propagation (BP) is an efficient way to solve inference problems in graphical models such as MRFs, we study how to apply BP algorithms to the analysis and control of CSMA wireless networks. We investigate three applications: (1) computing link throughputs given link access intensities; (2) computing the link access intensities required to meet target link throughputs; and (3) optimizing network utility via the control of link access intensities. We show that BP solves the three problems exactly in tree networks and with manageable computation errors in networks with loopy contention graphs. In particular, we show how a generalized version of BP, GBP, can be designed to solve the three problems with higher accuracy. Importantly, we show how the BP and GBP algorithms can be implemented in a distributed manner, making them useful in practical CSMA network operation. / The above studies focus on the computation and control of "equilibrium" link throughputs. Besides throughput, an important performance measure in CSMA networks is the propensity for starvation. In this thesis, we show that links in CSMA wireless networks are particularly susceptible to "temporal" starvation: certain links may have good equilibrium throughputs, yet still receive no throughput for extended periods from time to time. We develop a "trap theory" to analyze temporal throughput fluctuations. The trap theory serves two functions. First, it allows us to derive new mathematical results that shed light on the transient behavior of CSMA networks. Second, it lets us develop automated analytical tools for computing the "degrees of starvation" of CSMA networks to aid network design. We believe that the ability to identify and characterize temporal starvation established in this thesis is an important first step toward the design of effective remedies for it. / Kai, Caihong. / Adviser: Soung Chang Liew.
/ Source: Dissertation Abstracts International, Volume: 73-03, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 180-183). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
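As a hedged illustration of the kind of computation the ICN model supports, the sketch below enumerates the independent sets of a small contention graph and computes equilibrium link throughputs from the stationary distribution of the underlying continuous-time Markov chain, in which a state of n concurrently transmitting links has unnormalised probability ρ^n. The contention graph and the value of ρ are invented for illustration; this is not the thesis's BoE or BP machinery.

```python
from itertools import combinations

def link_throughputs(links, conflicts, rho):
    """Stationary fraction of time each link transmits in the ICN model.

    States are independent sets of the contention graph; a state s has
    unnormalised probability rho**len(s).
    """
    def independent(s):
        return not any((a, b) in conflicts or (b, a) in conflicts
                       for a, b in combinations(s, 2))

    states = [s for r in range(len(links) + 1)
              for s in combinations(links, r) if independent(s)]
    z = sum(rho ** len(s) for s in states)          # partition function
    return {link: sum(rho ** len(s) for s in states if link in s) / z
            for link in links}

# Illustrative 3-link chain: B conflicts with both A and C; A and C are disjoint.
print(link_throughputs(["A", "B", "C"], {("A", "B"), ("B", "C")}, rho=10.0))
```

Raising ρ in this three-link chain pushes the outer links toward full throughput while the middle link's share collapses, a small-scale picture of the temporal starvation effect discussed above.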
232

On tracing attackers of distributed denial-of-service attack through distributed approaches. / CUHK electronic theses & dissertations collection

January 2007
The denial-of-service attack has been a pressing problem in recent years, and denial-of-service defense research has blossomed into one of the main streams in network security. Techniques such as the pushback message, ICMP traceback, and packet filtering are notable results from this active field of research. / The focus of this thesis is to study and devise efficient and practical algorithms to tackle flood-based distributed denial-of-service attacks (flood-based DDoS attacks for short), with the aim of tracing every attacker location. We propose a divide-and-conquer traceback methodology. Tracing attackers on a global scale is a difficult and tedious task; instead, we suggest first identifying the Internet service providers (ISPs) that contribute to the flood-based DDoS attack using a macroscopic traceback approach. Once the ISPs concerned have been found, the traceback problem narrows, and the attackers can be located using a microscopic traceback approach. / For the macroscopic traceback problem, we propose an algorithm that leverages the well-known Chandy-Lamport distributed snapshot algorithm so that a set of border routers of the ISPs can correctly gather statistics in a coordinated fashion. The victim site can then deduce the local traffic intensities of all the participating routers. Given the collected statistics, we provide a method for the victim site to locate the attackers who sent out the dominating flows of packets. Our findings show that the proposed methodology can pinpoint the location of the attackers in a short period of time. / In the second part of the thesis, we study a well-known technique for the microscopic traceback problem. The probabilistic packet marking (PPM for short) algorithm by Savage et al. has attracted the most attention for contributing the idea of IP traceback. The most interesting point of this approach is that it allows routers to encode certain information on the attack packets with a pre-determined probability. Upon receiving a sufficient number of marked packets, the victim (or a data collection node) can construct the set of paths the attack packets traversed (the attack graph), and hence obtain the locations of the attackers. In this thesis, we present a discrete-time Markov chain model that calculates the precise number of marked packets required to construct the attack graph. / Though the PPM algorithm is a desirable approach to the microscopic traceback problem, it is not perfect: its termination condition is not well defined in the literature, and without a proper termination condition the traceback results could be wrong. In this thesis, we provide a precise termination condition for the PPM algorithm and, based on it, devise a new algorithm named the rectified probabilistic packet marking algorithm (RPPM algorithm for short). The most significant merit of the RPPM algorithm is that, when it terminates, the constructed attack graph is guaranteed to be correct with a specified level of confidence. Our findings show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under different router marking probabilities and different network graph structures. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm. / Wong Tsz Yeung. / "September 2007." / Adviser: Man Hon Wong. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4867. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 176-185). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
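To make the marking process concrete, here is a hedged simulation of a node-sampling simplification of PPM (the thesis analyzes Savage et al.'s scheme with a discrete-time Markov chain; this sketch is not that model, and the path length and marking probability are illustrative). Note that the stopping test below assumes the victim already knows how many routers to expect, which is precisely the knowledge a proper termination condition such as RPPM's must do without.

```python
import random

def packets_until_full_graph(path, p, trials=200):
    """Average number of packets the victim must collect before every router
    on the attack path has been observed at least once, under node-sampling
    marking: each router marks with probability p, and a packet arrives
    carrying the mark of the *last* router that chose to mark it.
    """
    total = 0
    for _ in range(trials):
        seen, count = set(), 0
        while len(seen) < len(path):
            mark = None
            for router in path:          # packet traverses attacker -> victim
                if random.random() < p:
                    mark = router        # later routers overwrite earlier marks
            count += 1
            if mark is not None:
                seen.add(mark)
        total += count
    return total / trials

# Illustrative 15-hop path with the commonly cited marking probability 1/25.
print(packets_until_full_graph(path=list(range(1, 16)), p=1 / 25))
```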
233

IaaS-cloud security enhancement: an intelligent attribute-based access control model and implementation

Al-Amri, Shadha M. S. January 2017
The cloud computing paradigm enables efficient utilisation of huge computing resources by multiple users, with minimal expense and deployment effort compared to traditional computing facilities. Although cloud computing has remarkable benefits, some governments and enterprises remain hesitant to transfer their computing technology to the cloud because of the associated security challenges. Security is, therefore, a significant factor in cloud computing adoption. Cloud services consist of three layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing services are accessed through network connections and used by multiple users who share resources through virtualisation technology. Accordingly, an efficient access control system is crucial to prevent unauthorised access. This thesis mainly investigates IaaS security enhancement from an access control point of view.
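As a minimal sketch of attribute-based access control in an IaaS setting (a generic ABAC decision function, not the intelligent model developed in the thesis; the attributes and policy rule are invented for illustration):

```python
def abac_decide(policy, subject, resource, environment):
    """Grant access iff every predicate of some policy rule holds.
    A policy is a list of rules; a rule is a list of predicates over
    the request context (subject, resource, environment attributes)."""
    context = {"subject": subject, "resource": resource, "env": environment}
    return any(all(pred(context) for pred in rule) for rule in policy)

# Illustrative rule: tenant admins may manage VMs of their own tenant
# during business hours.
policy = [[
    lambda c: c["subject"]["role"] == "tenant-admin",
    lambda c: c["subject"]["tenant"] == c["resource"]["tenant"],
    lambda c: 8 <= c["env"]["hour"] < 18,
]]

print(abac_decide(policy,
                  subject={"role": "tenant-admin", "tenant": "t1"},
                  resource={"tenant": "t1", "type": "vm"},
                  environment={"hour": 10}))   # True
```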
234

Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization

January 2019
This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to identify the interactions causing a fault by imposing additional conditions on how interactions are covered in rows. This dissertation introduces a new class, anonymizing arrays, which guarantee a degree of anonymity by bounding the probability that a particular row is identified by the interaction it presents. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores; differences arising from their application domains lead to the unique features of each class, requiring the techniques to be tailored to the specifics of each problem. One contribution of this work is a conditional expectation algorithm that builds covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for additional recursive algorithms; a cut-and-paste method creates large arrays from small ingredients, and performing transformations on the copies reduces redundancy in the composed arrays, leading to fewer rows. This work contains the first algorithm for constructing locating arrays for general values of $d$ and $t$. A randomized computational search framework verifies whether a candidate array is $(\bar{d},t)$-locating by partitioning the search space, and performs random resampling if a candidate fails; algorithmic parameters determine which columns to resample and when to add rows to the candidate array. The performance of these parameters is analyzed to provide guidance on tuning them to prioritize speed, accuracy, or a combination of both. Finally, this work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and constraints, and tailors the covering and locating array algorithms to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization, and two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. A post-optimization approach reduces the homogeneity of an anonymizing array. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
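A hedged sketch of the basic coverage check underlying this line of work: verifying that an array is a covering array of strength t by confirming that every t-way value combination appears in some row. The example array is a standard orthogonal array; the construction algorithms themselves (conditional expectation, cut-and-paste, resampling) are not reproduced here.

```python
from itertools import combinations

def is_covering_array(array, t, levels):
    """True iff every t-way interaction (a choice of t columns and one value
    combination over them) appears in at least one row of the array."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < levels ** t:      # some value combination is missing
            return False
    return True

# Four rows cover all pairs over three binary columns (an orthogonal array).
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(is_covering_array(ca, t=2, levels=2))   # True
```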
235

Understanding and Addressing Collaboration Challenges for the Effective Use of Multi-User CAD

French, David James 01 March 2016
Multi-user computer-aided design (CAD) is an emerging technology that promises to facilitate collaboration, enhance product quality, and reduce product development lead times by allowing multiple engineers to work on the same design at the same time. The BYU site of the NSF Center for e-Design has developed advanced multi-user CAD prototypes that demonstrate the feasibility and value of this technology. Despite the possibilities this software opens up for enhanced collaboration, there is now a new variety of challenges and opportunities to understand and address. For multi-user CAD to be used effectively in a modern engineering environment, it is necessary to understand and address both human and technical collaboration challenges; the purpose of this dissertation is to do so. Two studies were performed to better understand the human side of engineering collaboration: (1) engineers from multiple companies were interviewed to assess the collaboration challenges they experience, and (2) players of the multi-player game Minecraft were surveyed and studied to understand how a multi-user environment affects design collaboration. Methods were also developed to address two important technical challenges in multi-user CAD: (1) a method for detecting undo conflicts, and (2) additional methods for administering data access. This research addresses some of the important human and technical collaboration challenges in multi-user CAD. It enhances our understanding of collaboration challenges in the engineering industry, of how multi-user CAD will help address some of those challenges, and of how a multi-user design environment affects design collaboration. The method for detecting conflicts that occur during a local undo can be used to block conflicts and give the user information about the cause of a conflict so it can be resolved collaboratively. The methods for administering data access will help protect against unauthorized access to sensitive data.
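As an illustration of one way an undo conflict could be detected (a dependency-graph sketch under assumed data structures, not the method developed in the dissertation): an undo by one user conflicts when a later operation by another user depends, directly or transitively, on the operation being undone.

```python
def undo_conflicts(log, op_id, user):
    """Return later operations by other users that depend (transitively) on
    the operation being undone; a non-empty result means the local undo
    conflicts and should be blocked or negotiated.

    `log` is an ordered list of dicts: {"id", "user", "depends_on": set}.
    """
    affected, conflicts = {op_id}, []
    for op in log:
        if op["depends_on"] & affected:   # op builds on something being undone
            affected.add(op["id"])
            if op["user"] != user:
                conflicts.append(op["id"])
    return conflicts

log = [
    {"id": 1, "user": "ann", "depends_on": set()},   # sketch
    {"id": 2, "user": "ann", "depends_on": {1}},     # extrude of sketch 1
    {"id": 3, "user": "bob", "depends_on": {2}},     # fillet on the extrude
]
print(undo_conflicts(log, op_id=2, user="ann"))      # [3]: blocking conflict
```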
236

MULTIHIERARCHICAL DOCUMENTS AND FINE-GRAINED ACCESS CONTROL

Moore, Neil 01 January 2012
This work presents new models and algorithms for creating, modifying, and controlling access to complex text. The digitization of texts opens new opportunities for preservation, access, and analysis, but at the same time raises questions about how to represent and collaboratively edit such texts. Two issues of particular interest are modelling the relationships of markup (annotations) in complex texts, and controlling the creation and modification of those texts. This work addresses and connects these issues, with emphasis on data modelling, algorithms, and computational complexity, and contributes new results in these areas of research. Although hierarchical models of text and markup are common, complex texts often exhibit layers of overlapping structure that are best described by multihierarchical markup. We develop a new model of multihierarchical markup, the globally ordered GODDAG, that combines features of both graph- and range-based models of markup, allowing documents to be unambiguously serialized. We describe extensions to the XPath query language to support globally ordered GODDAGs, provide semantics for a set of update operations on this structure, and provide algorithms for converting between two different representations of the globally ordered GODDAG. Managing the collaborative editing of documents can require restricting the types of changes different editors may make, while not altogether restricting their access to the document. Fine-grained access control allows precisely these kinds of restrictions on the operations that a user is or is not permitted to perform on a document. We describe a rule-based model of fine-grained access control for updates of hierarchical documents, and in this context analyze the document generation problem: determining whether a document could have been created without violating a particular access control policy. We show that this problem is undecidable in the general case and provide computational complexity bounds for a number of restricted variants of the problem. Finally, we extend our fine-grained access control model from hierarchical to multihierarchical documents. We provide semantics for fine-grained access control policies that control splice-in, splice-out, and rename operations on globally ordered GODDAGs, and show that the multihierarchical version of the document generation problem remains undecidable.
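A small sketch of the range-based view of markup that motivates multihierarchical models: two annotations over the same character sequence that overlap without nesting cannot both live in a single tree of elements. The ranges below are illustrative; this is not the GODDAG structure itself.

```python
def overlap_without_nesting(a, b):
    """True iff character ranges a and b overlap but neither contains the
    other -- exactly the case a single element hierarchy cannot represent."""
    (a1, a2), (b1, b2) = a, b
    nested = (a1 <= b1 and b2 <= a2) or (b1 <= a1 and a2 <= b2)
    disjoint = a2 <= b1 or b2 <= a1
    return not nested and not disjoint

# A verse line vs. a sentence over the same text, as character offsets:
line, sentence = (0, 40), (25, 70)
print(overlap_without_nesting(line, sentence))   # True: needs multihierarchy
```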
237

Design and Performance Evaluation of a New Spatial Reuse FireWire Protocol

Chandramohan, Vijay 19 September 2003 (has links)
New generations of video surveillance systems are expected to comprise large-scale networks of intelligent video cameras with built-in image processing capabilities. These systems need to be tethered because of their bandwidth and power requirements. To support economical installation of video cameras and to manage the huge volume of information flowing through these networks, new shared-medium, daisy-chained physical-layer and medium access control (bus arbitration) protocols are needed. This thesis describes the design principles of the Spatial Reuse FireWire Protocol (SFP), a novel request/grant bus arbitration protocol architected for an acyclic daisy-chained network topology. SFP is a new extension of the IEEE 1394b FireWire architecture. SFP preserves the simple repeat-path functionality of FireWire while offering two significant advantages: 1) SFP supports concurrent data transmissions over disjoint segments of the network (spatial reuse of bandwidth), which increases the effective throughput, and 2) SFP provides support for priority traffic, which is necessary to handle real-time applications (such as packet video) and mission-critical applications (such as event notifications between cameras) that have strict delay and jitter constraints. The delay and throughput performance of FireWire and SFP were evaluated using discrete-event queuing simulation models built with the CSIM-18 simulation library. Simulation results show that for a homogeneous traffic pattern SFP improves upon the throughput of IEEE 1394b by a factor of 2; for a traffic pattern typical of video surveillance applications, throughput increases by a factor of 7. Simulation results also demonstrate that IEEE 1394b asynchronous stream packet transactions offer better delay performance than isochronous transactions for variable-bit-rate video such as MPEG-2 and MPEG-4. SFP extends this observation by supporting priority traffic: QoS for packet video is provided in SFP by mapping individual asynchronous stream packets to the three priority classes.
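To illustrate the spatial-reuse idea on a daisy chain (a hedged sketch; the node numbering and link-disjointness rule are simplifying assumptions, not the SFP arbitration logic): a transmission between two nodes occupies the contiguous span of links between them, and two transmissions can proceed concurrently when their spans share no link.

```python
def segments_disjoint(tx1, tx2):
    """In a linear daisy chain, a transmission between nodes (s, d) occupies
    the span of links between them; two transmissions may proceed
    concurrently iff their spans share no link."""
    lo1, hi1 = min(tx1), max(tx1)
    lo2, hi2 = min(tx2), max(tx2)
    return hi1 <= lo2 or hi2 <= lo1   # touching only at a node shares no link

# Nodes 0..7 on the chain: 0->2 and 5->7 can reuse bandwidth; 0->4 and 3->6 cannot.
print(segments_disjoint((0, 2), (5, 7)))   # True: concurrent transmission OK
print(segments_disjoint((0, 4), (3, 6)))   # False: spans share links
```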
238

Privacy-aware Use of Accountability Evidence

Reuben, Jenni January 2017
This thesis deals with the evidence that enables accountability, the privacy risks involved in using it, and a privacy-aware solution to the problem of unauthorized evidence disclosure. Legal means to protect the privacy of an individual are anchored in the data protection perspective, i.e., in the responsible collection and use of personal data. Accountability plays a crucial role in such legal privacy frameworks for assuring an individual's privacy. In the European context, the accountability principle is pervasive in the measures mandated by the General Data Protection Regulation. In general, these measures are technically achieved through automated privacy audits, and system traces that record system activities are the essential inputs to those audits. Nevertheless, the traces that enable accountability are themselves subject to privacy risks, because in most cases they inform about the processing of personal data. Therefore, ensuring the privacy of the accountability traces is as important as ensuring the privacy of the personal data itself. By and large, however, research involving accountability traces has been concerned with storage, interoperability, and analytics challenges rather than with the privacy implications of processing them. This dissertation focuses both on the application of accountability evidence, as in automated privacy audits, and on its privacy-aware use. The overall aim of the thesis is to provide a conceptual understanding of the privacy compliance research domain and to contribute to solutions that promote privacy-aware use of the traces that enable accountability. To address the first part of this objective, a systematic study of the existing body of knowledge on automated privacy compliance is conducted, and the state of the art is conceptualized as taxonomies. The second part is accomplished through two results: first, a systematic understanding of the privacy challenges involved in processing system traces; second, a model of privacy-aware access restrictions, proposed and formalized to prevent illegitimate access to the traces. Access to accountability traces such as provenance is required for the automatic fulfillment of accountability obligations, but the traces themselves contain personally identifiable information; this thesis therefore provides a solution that prevents unauthorized access to provenance traces.
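As a loose illustration of privacy-aware access restriction on accountability traces (invented record fields and a simple scope rule, not the model formalized in the thesis): release trace entries within an auditor's authorised scope and suppress identifying fields elsewhere.

```python
def visible_trace(trace, auditor):
    """Release only what the auditor is authorised to see: entries about
    data subjects within the auditor's audit scope are shown in full;
    other subjects' identifiers are suppressed."""
    out = []
    for entry in trace:
        if entry["subject"] in auditor["scope"]:
            out.append(entry)
        else:
            out.append({**entry, "subject": "<redacted>"})
    return out

trace = [
    {"subject": "alice", "action": "read",  "record": "r1"},
    {"subject": "bob",   "action": "write", "record": "r2"},
]
print(visible_trace(trace, auditor={"scope": {"alice"}}))
```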
239

Models for authorization and conflict resolution

Ruan, Chun, University of Western Sydney, College of Science, Technology and Environment, School of Computing and Information Technology January 2003
Access control is a significant issue in any secure computer system. Authorization models provide a formalism and framework for specifying and evaluating the access control policies that determine how access is granted and delegated among particular users. The aim of this dissertation is to investigate a flexible decentralized authorization model supporting authorization delegation, both positive and negative authorizations, and conflict resolution. A graph-based authorization framework is proposed that supports authorization delegation and both positive and negative authorizations. In particular, it is shown that existing conflict resolution methods are limited when applied to decentralized authorization models, and that cyclic authorizations can even lead to undesirable situations. A new conflict resolution policy is then proposed that supports well-controlled delegation by giving predecessors higher priority along the delegation path. The thesis provides a formal description of the proposed model and detailed descriptions of algorithms to implement it. The model is represented using labelled digraphs, which provide a formal basis for proving its semantic correctness. A weighted graph-based model is presented that allows grantors to further express degrees of certainty about the authorizations they grant. The work is then extended to more complex domains in which subjects, objects, and access rights are hierarchically structured and authorization inheritance along the hierarchies is taken into account. A precise semantics based on stable model semantics is given, and several important properties of delegatable authorization programs are investigated. The framework provides users with a reasonable method for expressing complex security policies. To address the many situations in which users may need to be granted or delegated authorizations for a limited period of time, a temporal decentralized authorization model is proposed in which temporal authorization delegations and negations are allowed, and the relevant semantic properties are investigated. Finally, as an application, the thesis shows how the proposed authorization model can be used in an e-consent system for health data. A system architecture for e-consent is presented and different types of e-consent models are discussed; the proposed model is shown to provide a good framework for representing and evaluating them. / Doctor of Philosophy (PhD)
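A hedged sketch of predecessor-priority conflict resolution on a delegation digraph: when a subject holds conflicting positive and negative authorizations, the grant whose grantor lies nearer the source of authority along the delegation path wins. The grant representation and the tie-breaking behaviour are assumptions for illustration, not the thesis's formal semantics.

```python
from collections import deque

def decide(grants, root, subject):
    """Resolve positive/negative authorization conflicts for `subject` by
    predecessor priority: the grant whose grantor sits nearest the source
    of authority `root` in the delegation digraph prevails.

    `grants`: list of (grantor, grantee, sign) with sign '+' or '-'.
    Ties at equal depth would need a further rule (e.g. denial wins).
    """
    children = {}
    for grantor, grantee, _ in grants:
        children.setdefault(grantor, set()).add(grantee)

    depth, queue = {root: 0}, deque([root])     # BFS delegation distance
    while queue:
        node = queue.popleft()
        for nxt in children.get(node, ()):
            if nxt not in depth:
                depth[nxt] = depth[node] + 1
                queue.append(nxt)

    relevant = [(depth.get(g, float("inf")), s)
                for g, t, s in grants if t == subject]
    return min(relevant)[1] if relevant else None   # nearest grantor wins

grants = [("root", "a", "+"), ("a", "s", "+"), ("root", "s", "-")]
print(decide(grants, "root", "s"))   # '-': root's direct denial overrides
```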
240

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. First, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad hoc situations. In contrast, in the proposed method, a number of models are gradually created while in use to cover the variety of patterns seen. Each model covers a specific region of the problem space, so novel or ad hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition, we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data; this critical situation occurs whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain; within each partition, statistics are accumulated on a number of different parameters, and the resultant data are passed to a second knowledge base that decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems: expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
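A hedged sketch of the two-knowledge-base idea: a toy rule-based partitioner stands in for the RDR knowledge base, and within each partition a median/MAD test (an assumed statistic, chosen here for its small-sample robustness; not necessarily the statistics used in the thesis) flags anomalous values. The rules, fields, and thresholds are all illustrative.

```python
import statistics

def partition(sample):
    """Toy rule-based partitioner standing in for the RDR knowledge base:
    route each traffic sample to a context-specific partition."""
    if sample["holiday"]:
        return "holiday"
    return "weekday" if sample["hour"] < 18 else "weekday-evening"

def is_anomalous(history, value, k=3.0):
    """Robust per-partition test: flag values far from the median, measured
    in units of a MAD-based spread; usable with little data, which matters
    whenever a new partition has just been added."""
    if len(history) < 5:
        return False                     # too little evidence: don't alarm
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1e-9
    return abs(value - med) > k * 1.4826 * mad   # 1.4826 scales MAD to sigma

history = {"weekday": [110, 120, 98, 105, 115, 102]}
sample = {"hour": 10, "holiday": False, "bytes": 540}
part = partition(sample)
print(part, is_anomalous(history.get(part, []), sample["bytes"]))  # weekday True
```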
