About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Side channel attack resistance: Migrating towards high-level methods

Borowczak, Mike 21 December 2013 (has links)
Our world is moving towards ubiquitous networked computing with unstoppable momentum. With technology available at our every fingertip, we expect to connect quickly, cheaply, and securely on the sleekest devices. While the past four decades of design automation research have focused on making integrated circuits smaller, cheaper, and quicker, the past decade has drawn more attention towards security. Though security within the scope of computing is a large domain, the focus of this work is the elimination of computationally based power byproducts, from high-level device models down to physical designs and implementations.

The scope of this dissertation is the analysis, attack, and protection of power-based side channels. Research in the field concentrates on determining, masking, and/or eliminating the sources of data-dependent information leakage within designs. While a significant amount of research is allocated to reducing this leakage at low levels of abstraction, significantly less effort has gone into higher levels of abstraction. This dissertation focuses on both ends of the design spectrum while motivating the future need for hierarchical side channel resistance metrics for hardware designs.

Current low-level solutions focus on creating perfectly balanced standard cells through various straightforward logic styles. Each of these existing logic styles, while enhancing side channel resistance by reducing the channels' variance, comes at significant design expense in terms of area footprint, power consumption, delay, and even logic style structure. The first portion of this work introduces a universal cell based on a dual multiplexer, implemented using pass-transistor logic, which approaches and exceeds some standard cell cost benchmarks. The proposed cell and circuit-level methods show significant improvements in security metrics over existing cells and approach standard CMOS cell and circuit performance by reducing area, power consumption, and delay. While most low-level works stop at the cell level, this work also investigates the impact of environmental factors on security.

On the other end of the design spectrum, existing secure architecture and algorithm research attempts to mask side channels through random noise, variable timing, instruction reordering, and other similar methods, obfuscating the primary source of information leaked through side channels. Unfortunately, in most cases these techniques are still susceptible to attack, and of those with promise, most are algorithm-specific. This dissertation approaches high-level security by eliminating the relationship between high-level side channel models and the side channels themselves. It discusses two solutions targeting architecture-level protection: the first deals with the protection of Finite State Machines, while the second deals with the protection of a class of cryptographic algorithms that use Feedback Shift Registers. The dissertation also includes methods for reducing the power overhead of any FSM circuit (secured or not). The solutions proposed herein render potential side channel models moot by eliminating or reducing the model's data-dependent variability. Designers unwilling to accept a doubling of area can still add a degree of (sub-optimal) security to their devices.
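To make the attack model concrete, here is a minimal sketch (not taken from the dissertation) of a correlation power analysis against a Hamming-weight leakage model, contrasted with a dual-rail-style balanced cell whose switching activity is data-independent. The S-box, key, and noise level are all illustrative assumptions.

```python
import random
import statistics

def hw(x):
    # Hamming weight: number of set bits, a common proxy for switching power.
    return bin(x).count("1")

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)   # stand-in for a real cipher S-box
KEY = 0x3C             # secret the attacker tries to recover

def trace(pt, balanced):
    # Unbalanced cell: power tracks the Hamming weight of the S-box output.
    # Balanced (dual-rail style) cell: true and complement rails always toggle
    # the same total number of wires, so the data-dependent term vanishes.
    w = 8.0 if balanced else float(hw(SBOX[pt ^ KEY]))
    return w + random.gauss(0, 0.3)

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den if den else 0.0

def recover_key(balanced):
    pts = [random.randrange(256) for _ in range(2000)]
    ts = [trace(p, balanced) for p in pts]
    # The correct key guess maximizes correlation between predicted and
    # measured power; with a balanced cell, no guess stands out.
    return max(range(256), key=lambda g: corr([hw(SBOX[p ^ g]) for p in pts], ts))

print(hex(recover_key(balanced=False)))  # recovers 0x3c from the leaky model
print(hex(recover_key(balanced=True)))   # essentially a random guess
```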
2

General Purpose MCMC Sampling for Bayesian Model Averaging

Boyles, Levi Beinarauskas 26 September 2014 (has links)
In this thesis we explore the problem of inference for Bayesian model averaging. Many popular topics in Bayesian analysis, such as Bayesian nonparametrics, can be cast as model averaging problems. Model averaging problems pose unique difficulties for inference, as the parameter space is not fixed and may be infinite. As such, there is little existing work on general purpose MCMC algorithms in this area. We introduce a new MCMC sampler, which we call Retrospective Jump sampling, that is suitable for general purpose model averaging. In developing Retrospective Jump, a practical need arises for a finite-dimensional MCMC sampler that handles multimodal target densities; we introduce Refractive Sampling to fill this role. Finally, we evaluate Retrospective Jump on several model averaging and Bayesian nonparametric problems, and develop a novel latent feature model with hierarchical column structure that uses Retrospective Jump for inference.
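Retrospective Jump and Refractive Sampling are the thesis's own contributions, so the sketch below shows only the baseline they improve on: a random-walk Metropolis sampler stuck on a bimodal target, the failure mode that motivates a multimodality-aware sampler. The target density and step size are illustrative.

```python
import math
import random

def log_target(x):
    # Mixture of two well-separated unit-variance Gaussians at -4 and +4.
    return math.log(0.5 * math.exp(-0.5 * (x + 4) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 4) ** 2))

def metropolis(n_steps=50_000, step=0.5):
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step)
        # Standard Metropolis accept/reject, done on the log scale.
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis()
# With a small step the chain rarely crosses the low-density gap between
# modes, so one mode dominates the sample -- the pathology that motivates
# samplers built for multimodal targets.
print(sum(1 for d in draws if d > 0) / len(draws))
```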
3

Random testing of open source C compilers

Yang, Xuejun 23 June 2015 (has links)
Compilers are indispensable tools for developers. We expect them to be correct. However, compiler correctness is very hard to reason about, due in part to the daunting complexity of compilers.

In this dissertation, I explain how we constructed a random program generator, Csmith, and used it to find hundreds of bugs in widely used open source compilers such as the GNU Compiler Collection (GCC) and the LLVM Compiler Infrastructure (LLVM). The success of Csmith depends on its ability to be expressive and unambiguous at the same time. Csmith is composed of a code generator and a GTAV (Generation-Time Analysis and Validation) engine, which work in concert to produce expressive yet unambiguous random programs. The expressiveness of Csmith is attributed to the code generator, while the unambiguity is assured by GTAV, which efficiently performs program analyses, such as points-to analysis and effect analysis, to avoid ambiguities caused by undefined or unspecified behaviors.

During our 4.25 years of testing, Csmith has found over 450 bugs in GCC and LLVM. We analyzed the bugs by putting them into categories, studying the root causes, finding their locations in the compilers' source code, and evaluating their importance. We believe these analysis results are useful to future random testers as well as to compiler writers and users.
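A minimal differential-testing harness in the spirit of the workflow described above: generate a random program with csmith, compile it with two compilers, run both binaries, and keep any test case whose outputs disagree. Compiler flags, paths, and the omission of the Csmith runtime include path are simplifying assumptions.

```python
import os
import subprocess
import tempfile

# Each entry: (compiler binary, flags). Real campaigns sweep many -O levels.
COMPILERS = [("gcc", ["-O2"]), ("clang", ["-O2"])]

def compile_and_run(src, cc, flags):
    exe = src + "." + cc
    # Real use also needs -I <csmith-runtime-dir> for csmith.h; omitted here.
    subprocess.run([cc, *flags, src, "-o", exe], check=True)
    # Csmith programs print a checksum of their global state, so an output
    # mismatch between compilers signals a likely miscompilation.
    return subprocess.run([exe], capture_output=True, timeout=10).stdout

def differential_test(iterations=100):
    for _ in range(iterations):
        fd, src = tempfile.mkstemp(suffix=".c")
        with os.fdopen(fd, "w") as f:
            subprocess.run(["csmith"], stdout=f, check=True)
        outputs = {cc: compile_and_run(src, cc, flags) for cc, flags in COMPILERS}
        if len(set(outputs.values())) > 1:
            print(f"possible compiler bug; kept test case {src}")
        else:
            os.remove(src)

if __name__ == "__main__":
    differential_test()
```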
4

Opportunities for near data computing in MapReduce workloads

Pugsley, Seth Hintze 25 June 2015 (has links)
In-memory big data applications are growing in popularity, including in-memory versions of the MapReduce framework. The move away from disk-based datasets shifts the performance bottleneck from slow disk accesses to memory bandwidth. MapReduce is a data-parallel application, and is therefore amenable to being executed on as many parallel processors as possible, with each processor requiring high memory bandwidth. We propose using Near Data Computing (NDC) as a means to develop systems that are optimized for in-memory MapReduce workloads, offering high compute parallelism and even higher memory bandwidth. This dissertation explores three different implementations and styles of NDC to improve MapReduce execution. First, we use 3D-stacked memory+logic devices to process the Map phase on compute elements in close proximity to database splits. Second, we attempt to replicate the performance characteristics of 3D-stacked NDC using only commodity memory and inexpensive processors to improve the performance of both the Map and Reduce phases. Finally, we incorporate fixed-function hardware accelerators to improve sorting performance within the Map phase. This dissertation shows that it is possible to improve in-memory MapReduce performance by potentially two orders of magnitude by designing system and memory architectures that are specifically tailored to that end.
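A back-of-envelope model of why the Map phase benefits from near-data compute: Map streams over its input once, so aggregate memory bandwidth, not core count alone, sets its runtime. Every number below is an illustrative assumption, not a measurement from the dissertation.

```python
def map_phase_time(dataset_gb, per_channel_gbs, n_channels):
    # Map is streaming and data-parallel, so runtime is bandwidth-bound.
    return dataset_gb / (per_channel_gbs * n_channels)

# Hypothetical configurations (all numbers are assumptions):
host_time = map_phase_time(256, 20, 4)    # host CPU, 4 DDR channels @ 20 GB/s
ndc_time = map_phase_time(256, 10, 128)   # 128 near-memory cores @ 10 GB/s each

print(f"host: {host_time:.2f}s  NDC: {ndc_time:.2f}s  "
      f"speedup: {host_time / ndc_time:.0f}x")   # 16x in this toy model
```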
5

Fast modular exponentiation using residue domain representation: A hardware implementation and analysis

Nguyen, Christopher Dinh 01 March 2014 (has links)
Using modular exponentiation as an application, we engineered on FPGA fabric and analyzed the first implementation of two arithmetic algorithms in Reduced-Precision Residue Number Systems (RP-RNS): the partial-reconstruction algorithm and the quotient-first scaling algorithm. Residue number systems (RNS) provide an alternative to binary representation for computation. They offer fully parallel computation for addition, subtraction, and multiplication; however, base extension, division, and sign detection become harder operations. Phatak's RP-RNS uses a time-memory trade-off to achieve O(lg N) running time for base extension and scaling, where N is the bit-length of the operands, compared with Kawamura's Cox-Rower architecture and its derivatives, which, to the best of our knowledge, take O(N) steps and therefore O(N) delay.

We implemented the fully parallel RP-RNS architecture based on Phatak's description and architecture diagrams. Our design decisions included distributing the lookup tables among the channels, removing the adder trees, and removing parallel table access, thus trading size for speed. In retrospect, we should have hosted the tables in memory off the FPGA. We measured FPGA utilization, storage size, and cycle counts. The data we present, though less than optimal, confirms the theoretical trends calculated by Phatak: FPGA utilization grows proportionally to K log(K), where K is the number of hardware channels; storage grows proportionally to O(N^3 lg lg N); and, when using Phatak's recommendations, cycle count grows proportionally to O(lg N). Our contributions include documentation of our design, architecture, and implementation; a detailed testing methodology; and performance data based on our implementation, enabling others to replicate our implementation and findings.
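For readers new to residue arithmetic, here is a plain RNS sketch: forward conversion, carry-free channelwise multiplication, and Chinese Remainder Theorem reconstruction. It shows why add/subtract/multiply parallelize perfectly across channels; it does not implement Phatak's partial-reconstruction or quotient-first scaling algorithms, and the moduli are arbitrary.

```python
from math import prod

MODULI = (13, 17, 19, 23)    # pairwise-coprime channel moduli (illustrative)
M = prod(MODULI)             # dynamic range: values live in [0, M)

def to_rns(x):
    # Forward conversion: one residue per channel.
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Channels never exchange carries, so this is fully parallel in hardware.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction (the expensive direction).
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return total % M

x, y = 1234, 5678
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M
```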
6

PLC code vulnerabilities through SCADA systems

Valentine, Sidney E. 15 June 2013 (has links)
Supervisory Control and Data Acquisition (SCADA) systems are widely used in automated manufacturing and in all areas of our nation's infrastructure. Applications range from chemical processes and water treatment facilities to oil and gas production and electric power generation and distribution. Current research on SCADA system security focuses on the primary SCADA components and targets network-centric attacks. Security risks posed by attacks against peripheral devices such as Programmable Logic Controllers (PLCs) have not been sufficiently addressed. Our research addresses the need to develop PLC applications that are correct, safe, and secure. This research provides an analysis of software safety and security threats. We develop countermeasures that are compatible with existing PLC technologies, study both intentional and unintentional software errors, and propose methods to prevent them. The main contributions of this dissertation are:

1. Develop a taxonomy of software errors and attacks in ladder logic.
2. Model ladder logic vulnerabilities.
3. Develop security design patterns to avoid software vulnerabilities and incorrect practices.
4. Implement a proof-of-concept static analysis tool that detects vulnerabilities in PLC code and recommends corresponding design patterns (a toy illustration of such a check follows below).
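As a toy illustration of the kind of check contribution 4's static analysis tool might perform, the sketch below scans a textual instruction-list rendering of ladder logic for one classic error from such taxonomies: the same output coil driven from multiple rungs, where the last write silently wins on every scan cycle. The program and representation are invented for illustration.

```python
from collections import Counter

# Toy instruction-list rendering of a ladder program (invented example).
program = [
    ("LD",  "Start_PB"),
    ("OUT", "Motor"),        # rung 1 drives Motor
    ("LD",  "Remote_Cmd"),
    ("OUT", "Motor"),        # rung 2 also drives Motor: duplicate coil
    ("LD",  "E_Stop"),
    ("OUT", "Alarm"),
]

def duplicate_coils(instructions):
    # Flag any coil written by more than one rung; on each scan cycle the
    # last OUT silently overrides the earlier ones.
    coils = Counter(operand for op, operand in instructions if op == "OUT")
    return [name for name, n in coils.items() if n > 1]

print(duplicate_coils(program))   # ['Motor']
```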
7

Result Distribution in Big Data Systems

Cheelangi, Madhusudan 09 August 2013 (has links)
We are building a Big Data Management System (BDMS) called AsterixDB at UCI. Since AsterixDB is designed to operate on large volumes of data, the results of its queries can be very large, and AsterixDB is also designed to operate under high-concurrency workloads. As a result, we need a specialized mechanism to manage these large volumes of query results and deliver them to clients. In this thesis, we present the architecture and implementation of a new result distribution framework that is capable of handling large volumes of results under high-concurrency workloads. We present the various components of this result distribution framework and show how they interact with each other to manage large volumes of query results and deliver them to clients. We also discuss various result distribution policies that are possible with our framework and compare their performance through experiments.

We have implemented a REST-like HTTP client interface on top of the result distribution framework to allow clients to submit queries and obtain their results. This client interface provides two modes for reading query results: synchronous and asynchronous. In synchronous mode, query results are delivered to a client as a direct response to its query, within the same request-response cycle. In asynchronous mode, a query handle is returned to the client instead; the client can store the handle and later send another request that includes it to read the query's result whenever it wants. The architectural support for these two modes is also described in this thesis. We believe that the result distribution framework, combined with this client interface, successfully meets the result management demands of AsterixDB.
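The sketch below illustrates the two client modes from the thesis against a hypothetical REST endpoint; the URL, parameter names, status-code convention, and response shape are assumptions for illustration, not AsterixDB's actual interface.

```python
import time

import requests

BASE = "http://localhost:19002/query"   # hypothetical endpoint

def query_sync(statement):
    # Synchronous mode: the result arrives in the same request-response cycle.
    return requests.post(BASE, data={"statement": statement,
                                     "mode": "sync"}).json()

def query_async(statement, poll_seconds=1.0):
    # Asynchronous mode: the server immediately returns a handle; the client
    # redeems it later, once the (possibly very large) result is ready.
    handle = requests.post(BASE, data={"statement": statement,
                                       "mode": "async"}).json()["handle"]
    while True:
        resp = requests.get(BASE + "/result", params={"handle": handle})
        if resp.status_code == 200:      # assumed "result ready" signal
            return resp.json()
        time.sleep(poll_seconds)
```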
8

A Framework for Quality of Service and Fault Management in Service-Oriented Architecture

Zhang, Jing 20 August 2013 (has links)
Service-Oriented Architecture (SOA) provides a powerful yet flexible paradigm for integrating distributed services into business processes that perform complex functionalities. However, this flexibility and the environment's uncertainties complicate system performance management. In this dissertation, a quality of service (QoS) management framework is designed and implemented to support reliable service delivery in SOA. The framework covers runtime process performance monitoring, faulty service diagnosis, and process recovery. During runtime, the QoS management system provides mechanisms to detect performance issues, identify the root cause(s) of problems, and repair a process by replacing faulty services.

To reduce the burden of monitoring all services, only a set of the most informative services is monitored at runtime; several monitor selection algorithms are designed to choose monitoring locations wisely. Three diagnosis algorithms are designed for root cause identification: Bayesian network (BN) diagnosis, dependency-matrix-based (DM) diagnosis, and a hybrid diagnosis. DM diagnosis does not require process execution history and has a lower time complexity than BN diagnosis, but BN diagnosis usually achieves better accuracy. The hybrid diagnosis integrates the two to obtain a good diagnosis result while avoiding a large portion of BN diagnosis's cost. Moreover, heuristic strategies can be used in hybrid diagnosis to further improve its efficiency.

We have implemented a prototype of the QoS and fault management framework in the Llama middleware. The thesis presents the design and implementation of the diagnosis engine and the adaptation manager (for process reconfiguration) in Llama. The diagnosis engine identifies root-cause services and triggers the adaptation manager, which decides how to replace the faulty services. System performance is studied using realistic services deployed on networked servers. Both simulation results and the system performance study show that our monitoring, diagnosis, and recovery approaches are practical and efficient.
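A toy version of DM-style diagnosis as summarized above: each monitored path depends on a set of services, and a service that appears in every failing path but no passing path becomes a root-cause candidate. The paths, observations, and exoneration rule are illustrative simplifications, not the thesis's algorithm.

```python
# Which services each monitored execution path depends on (invented data).
paths = {
    "p1": {"auth", "catalog", "billing"},
    "p2": {"auth", "search"},
    "p3": {"catalog", "shipping"},
}
observations = {"p1": "fail", "p2": "ok", "p3": "fail"}

def dm_diagnose(paths, obs):
    failing = [paths[p] for p, status in obs.items() if status == "fail"]
    passing = [paths[p] for p, status in obs.items() if status == "ok"]
    # Candidates must lie on every failing path...
    suspects = set.intersection(*failing) if failing else set()
    # ...and services on any healthy path are exonerated.
    for ok_path in passing:
        suspects -= ok_path
    return suspects

print(dm_diagnose(paths, observations))   # {'catalog'}
```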
9

On the Modular Verification and Design of Firewalls

Bhattacharya, Hrishikesh 20 September 2013 (has links)
Firewalls, packet filters placed at the boundary of a network in order to screen incoming packets of traffic (and discard any undesirable packets), are a prominent component of network security. In this dissertation, we make several contributions to the study of firewalls.

1. Current algorithms for verifying the correctness of firewall policies use O(n^d) space, where n is the number of rules in the firewall (several thousand) and d the number of fields in a rule (about five). We develop a fast probabilistic firewall verification algorithm, which runs in O(nd) time and space and determines whether a firewall F satisfies a property P. The algorithm is provably correct in several interesting cases (notably, for every instance where it states that F does not satisfy P), and the overall probability of error is extremely small, on the order of 0.005%. (A naive sampling baseline for this verification problem is sketched below.)

2. As firewalls are often security-critical systems, it may be necessary to verify the correctness of a firewall with no possibility of error, so there is still a need for a fast deterministic firewall verifier. In this dissertation, we present a deterministic firewall verification algorithm that uses only O(nd) space.

3. In addition to correctness, optimizing firewall performance is an important issue, as slow-running firewalls can be targeted by denial-of-service attacks. We demonstrate in this dissertation that there is in fact a strong connection between firewall verification and the detection of redundant rules; an algorithm for one can be readily adapted to the other task. We suggest that our algorithms for firewall verification can also be used for firewall optimization.

4. To help design correct and efficient firewalls, we suggest two metrics for firewall complexity, and demonstrate how to design firewalls as a battery of simple firewall modules rather than as a monolithic sequence of rules. We also demonstrate how to convert an existing monolithic firewall into a modular one. We propose that modular design can make firewalls easy to design and easy to understand.

Thus, this dissertation covers all stages in the life cycle of a firewall (design, testing and verification, and analysis) and makes contributions to the current state of the art in each of these fields.
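To pin down the first-match semantics that both verifiers reason about, here is the naive sampling baseline referenced in contribution 1; it is not the dissertation's O(nd) algorithm, and the rules and property are invented for illustration.

```python
import random

# Each rule: ({field: inclusive (lo, hi) range}, decision); first match wins.
RULES = [
    ({"src": (0, 99),  "dst": (0, 255)}, "deny"),
    ({"src": (0, 255), "dst": (0, 127)}, "accept"),
    ({"src": (0, 255), "dst": (0, 255)}, "deny"),   # default rule
]

def decide(pkt):
    for fields, decision in RULES:
        if all(lo <= pkt[f] <= hi for f, (lo, hi) in fields.items()):
            return decision
    return "deny"   # implicit default for incomplete rule lists

# Property P: every packet with src <= 99 is denied.
def violates_property(pkt):
    return pkt["src"] <= 99 and decide(pkt) != "deny"

packets = ({"src": random.randrange(256), "dst": random.randrange(256)}
           for _ in range(100_000))
print(any(violates_property(p) for p in packets))   # False: F satisfies P
```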
10

Diagnosing performance changes in distributed systems by comparing request flows

Sambasivan, Raja Raman 12 November 2013 (has links)
Diagnosing performance problems in modern datacenters and distributed systems is challenging, as the root cause could be contained in any one of the system's numerous components or, worse, could be a result of interactions among them. As distributed systems continue to increase in complexity, diagnosis tasks will only become more challenging. There is a need for a new class of diagnosis techniques capable of helping developers address problems in these distributed environments.

As a step toward satisfying this need, this dissertation proposes a novel technique, called request-flow comparison, for automatically localizing the sources of performance changes from the myriad potential culprits in a distributed system down to just a few candidates. Request-flow comparison works by contrasting how individual requests are serviced within and among every component of the distributed system between two periods: a non-problem period and a problem period. By identifying and ranking performance-affecting changes, request-flow comparison provides developers with promising starting points for their diagnosis efforts. Request workflows are obtained with less than 1% overhead via recently developed end-to-end tracing techniques.

To demonstrate the utility of request-flow comparison in various distributed systems, this dissertation describes its implementation in a tool called Spectroscope and describes how Spectroscope was used to diagnose real, previously unsolved problems in the Ursa Minor distributed storage service and in select Google services. It also explores request-flow comparison's applicability to the Hadoop File System. Via a 26-person user study, it identifies effective visualizations for presenting request-flow comparison's results and further demonstrates that request-flow comparison helps developers quickly identify starting points for diagnosis. This dissertation also distills design choices that will maximize an end-to-end tracing infrastructure's utility for diagnosis tasks and other use cases.
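A toy request-flow comparison in the spirit described above: group traces by the component path they took, then rank paths by latency shift between the non-problem and problem periods. Real implementations compare full distributions and structural changes; the mean-shift score and the traces here are illustrative simplifications.

```python
from collections import defaultdict
from statistics import mean

def by_path(traces):
    # Group (path, latency_ms) trace records by the path the request took.
    groups = defaultdict(list)
    for path, latency_ms in traces:
        groups[path].append(latency_ms)
    return groups

def rank_changes(before, after):
    b, a = by_path(before), by_path(after)
    scores = []
    for path in set(b) | set(a):
        # Mean latency shift; paths absent in a period count as 0 ms there.
        shift = mean(a.get(path, [0])) - mean(b.get(path, [0]))
        scores.append((shift, path))
    return sorted(scores, reverse=True)   # biggest slowdowns ranked first

before = [("fe>cache", 2), ("fe>cache", 3), ("fe>db>disk", 10)]
after = [("fe>cache", 2), ("fe>db>disk", 45), ("fe>db>disk", 40)]
print(rank_changes(before, after)[0])   # (32.5, 'fe>db>disk'): top suspect
```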
