1. Improving SAT Solvers by Exploiting Empirical Characteristics of CDCL
Oh, Chanseok, 03 March 2016
The Boolean Satisfiability Problem (SAT) is a canonical decision problem originally shown to be NP-complete in Cook's seminal work on the theory of computational complexity. The SAT problem is one of several computational tasks identified by researchers as core problems in computer science, and the existence of an efficient decision procedure for SAT would imply P = NP. Nevertheless, numerous algorithms and techniques for solving the SAT problem have been proposed for practical settings, and highly efficient solvers are now actively used, either directly or as the core engine of a larger system, to solve real-world problems from many application domains. These state-of-the-art solvers use the Davis-Putnam-Logemann-Loveland (DPLL) algorithm extended with Conflict-Driven Clause Learning (CDCL). Given the practical importance of SAT, building a fast SAT solver can have a huge impact on current and prospective applications. The ultimate contribution of this thesis is improving the state of the art of CDCL by understanding and exploiting the empirical characteristics of how CDCL works on real-world problems. The first part of the thesis shows empirically that most unsatisfiable real-world problems solvable by CDCL have a refutation proof with near-constant width for the greater portion of the proof. Based on this observation, the thesis offers the unconventional perspective that CDCL solvers can solve real-world problems efficiently, and often more efficiently, just by maintaining a small set of certain classes of learned clauses. The next part of the thesis focuses on understanding the inherently different natures of satisfiable and unsatisfiable problems and their implications for the empirical workings of CDCL. We examine how the roles and effects of crucial elements of CDCL vary with the satisfiability status of a problem. Ultimately, we propose effective techniques that exploit these new insights about the different natures of proving satisfiability and unsatisfiability to improve the state of the art of CDCL. In the last part of the thesis, we present a reference solver that incorporates all the techniques described in the thesis. The design of the presented solver emphasizes minimality in implementation while guaranteeing state-of-the-art performance. Several versions of the reference solver have demonstrated top-notch performance, earning several medals in the annual SAT competitive events. The minimal spirit of the reference solver shows that a simple CDCL framework alone can still be made competitive with state-of-the-art solvers that implement sophisticated techniques outside the CDCL framework.
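To give a rough sense of the kind of clause-database management the abstract alludes to, the sketch below keeps a small "core" of learned clauses judged high quality and periodically discards the rest. The quality metric used here (clause width, i.e., literal count), the thresholds, and all names are assumptions made for illustration; they are not the policy proposed in the thesis.

```python
# Illustrative sketch only: a tiny learned-clause database that keeps a small
# core of narrow clauses permanently and periodically discards wider ones.
# The width threshold and reduction policy are illustrative assumptions.

class LearnedClauseDB:
    def __init__(self, core_width=6, capacity=10000):
        self.core_width = core_width   # clauses at or below this width are kept permanently
        self.capacity = capacity       # soft limit on the non-core ("local") set
        self.core = []                 # small set of high-quality learned clauses
        self.local = []                # everything else, subject to reduction

    def add(self, clause):
        """Store a newly learned clause (a list of integer literals)."""
        if len(clause) <= self.core_width:
            self.core.append(clause)
        else:
            self.local.append(clause)

    def reduce(self):
        """Drop the widest half of the local set when it grows past capacity."""
        if len(self.local) > self.capacity:
            self.local.sort(key=len)               # narrow clauses first
            self.local = self.local[: len(self.local) // 2]


if __name__ == "__main__":
    db = LearnedClauseDB(core_width=3, capacity=2)
    for c in ([1, -2], [3, -4, 5], [6, 7, -8, 9, 10], [2, -5, 7, 11], [1, 4, -6, 8, -9]):
        db.add(c)
    db.reduce()
    print(len(db.core), "core clauses;", len(db.local), "local clauses")
```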
2. Enforcing Security Policies On GPU Computing Through The Use Of Aspect-Oriented Programming Techniques
AlBassam, Bader, 03 August 2016
This thesis presents a new security policy enforcer designed for securing parallel computation on CUDA GPUs. We show how the very features that make a GPGPU desirable have already been utilized in existing exploits, reinforcing the need for security protections on a GPGPU. An aspect weaver was designed for CUDA with the goal of using aspect-oriented programming for security policy enforcement. Empirical testing verified the ability of our aspect weaver to enforce various policies. Furthermore, a performance analysis demonstrated that using this policy enforcer imposes no significant performance impact over manual insertion of policy code. Finally, future research goals are presented through a plan of work. We hope that this thesis will provide long-term research goals to guide the field of GPU security.
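The sketch below conveys the general aspect-oriented idea of weaving a security check around an operation, using a Python decorator as a stand-in for the CUDA-level aspect weaver the abstract describes. The policy, the function names, and the use of Python rather than CUDA are all assumptions made for illustration, not the thesis's implementation.

```python
# Illustrative sketch only: policy "advice" woven around a kernel launch,
# in the spirit of aspect-oriented policy enforcement. A Python decorator
# stands in for the CUDA aspect weaver; the size policy is a made-up example.

import functools

def enforce_policy(max_buffer_bytes):
    """Advice that rejects launches whose input buffer exceeds a size policy."""
    def aspect(launch):
        @functools.wraps(launch)
        def woven(buffer, *args, **kwargs):
            if len(buffer) > max_buffer_bytes:
                raise PermissionError("policy violation: buffer exceeds allowed size")
            return launch(buffer, *args, **kwargs)
        return woven
    return aspect

@enforce_policy(max_buffer_bytes=1024)
def launch_kernel(buffer):
    # Placeholder for the real GPU work.
    return sum(buffer) % 256

if __name__ == "__main__":
    print(launch_kernel(bytes(range(16))))      # allowed by the policy
    try:
        launch_kernel(bytes(2048))              # rejected by the woven check
    except PermissionError as err:
        print(err)
```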
3. Productive Design of Extensible On-Chip Memory Hierarchies
Cook, Henry Michael, 02 September 2016
As Moore's Law slows and process scaling yields only small returns, computer architecture and design are poised to undergo a renaissance. This thesis brings the productivity of modern software tools to bear on the design of future energy-efficient hardware architectures.

In particular, it targets one of the most difficult design tasks in the hardware domain: coherent hierarchies of on-chip caches. I have extended the capabilities of Chisel, a new hardware description language, by providing libraries that hardware developers can use to describe the configuration and behavior of such memory hierarchies, with a focus on the cache coherence protocols that work behind the scenes to preserve their abstraction of global shared memory. I discuss how the methods I provide enable productive and extensible memory hierarchy design by separating the concerns of different hierarchy components, and I explain how this forms the basis for a generative approach to agile hardware design.

This thesis describes a general framework for context-dependent parameterization of any hardware generator, defines a specific set of Chisel libraries for generating extensible cache-coherent memory hierarchies, and provides a methodology for decomposing high-level descriptions of cache coherence protocols into controller-localized, object-oriented transactions.

This methodology has been used to generate the memory hierarchies of a lineage of RISC-V chips fabricated as part of the ASPIRE Lab's investigations into application-specific processor design.
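A rough sense of context-dependent parameterization can be conveyed by a toy generator that elaborates a cache configuration from a shared parameter context, sketched below in Python rather than Chisel. The parameter names and derivation rules are invented for illustration and do not reflect the actual libraries described in the thesis.

```python
# Illustrative sketch only: a toy generator that elaborates a cache
# configuration from a context of parameters, loosely echoing the idea of
# context-dependent parameterization. Names and rules are invented; this is
# not the Chisel library described in the thesis.

from dataclasses import dataclass

@dataclass
class Context:
    cache_size_bytes: int
    block_bytes: int
    ways: int
    coherence: str   # e.g. "MSI" or "MESI"

def elaborate_cache(ctx: Context) -> dict:
    """Derive the structural parameters of one cache from the shared context."""
    sets = ctx.cache_size_bytes // (ctx.block_bytes * ctx.ways)
    return {
        "sets": sets,
        "ways": ctx.ways,
        "block_bytes": ctx.block_bytes,
        "coherence_states": {"MSI": 3, "MESI": 4}[ctx.coherence],
    }

if __name__ == "__main__":
    l1 = elaborate_cache(Context(cache_size_bytes=32 * 1024, block_bytes=64, ways=4, coherence="MESI"))
    print(l1)   # {'sets': 128, 'ways': 4, 'block_bytes': 64, 'coherence_states': 4}
```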
4. Embracing security in all phases of the software development life cycle: A Delphi study
Deschene, Marie, 10 November 2016
Software is omnipresent, from refrigerators to financial institutions. In addition to software that defines cyber system functionality, there is an increasing amount of digitized data on cyber systems. This growing amount of easily available data has prompted a rise in attacks on cyber systems by globally organized attackers. The solution, proposed by multiple authors, is to plan security into software products throughout all software development phases. This approach constitutes a change in the software development life cycle (SDLC) process, and acceptance and approval from all software development stakeholders is needed to make this type of cultural paradigm shift. A Delphi study was performed into what would encourage software development stakeholders to accept the need for security during software development. Results of the three-round Delphi study revealed that education (formal and informal) would increase stakeholders' understanding of the risks of insecure software and teach them how to plan and write more secure software. The Delphi study also revealed that mitigation of time and resource constraints on software projects is needed to encourage software teams to embrace the need for, and the effort necessary to include, security in all phases of the SDLC.
5. Progression and Edge Intelligence Framework for IoT Systems
Huang, Zhenqiu, 26 October 2016
This thesis studies the issues of building and managing future Internet of Things (IoT) systems. IoT systems consist of distributed components with services for sensing, processing, and controlling through devices deployed in our living environment as part of the global cyber-physical ecosystem.

Systems with perpetually running IoT devices may use a lot of energy, so one challenge is implementing good management policies for energy saving. In addition, a large number of devices may be deployed in wide geographical areas over low-bandwidth wireless communication networks, which brings the challenge of configuring a large number of duplicated applications with low latency in a scalable manner. Finally, intelligent IoT applications, such as occupancy prediction and activity recognition, depend on analyzing user and event patterns from historical data. In order to achieve real-time interaction between humans and things, reliable yet real-time analytic support should be included to leverage the interplay and complementary roles of edge and cloud computing.

In this dissertation, I address the above issues from the service-oriented point of view. Service-oriented architecture (SOA) provides integration and management flexibility using the abstraction of services deployed on devices. We have designed the WuKong IoT middleware to facilitate connectivity, deployment, and run-time management of IoT applications.

For energy-efficient mapping, this thesis presents an energy-saving methodology for co-locating several services on the same physical device in order to reduce computing and communication energy. In a multi-hop network, the service co-location problem is formulated as a quadratic programming problem, and I propose a reduction that turns it into an integer programming problem. In a single-hop network, the service co-location problem can be modeled as the Maximum Weighted Independent Set (MWIS) problem. I design an algorithm to transform a service flow into a co-location graph. Then, known heuristic algorithms to find the maximum independent set, which is the basis for making service co-location decisions, are applied to the co-location graph (a small illustrative sketch of such a heuristic follows this abstract).

For low-latency scalable deployment, I propose a region-based hierarchical management structure. A congestion zone that covers multiple regions is identified, and the problem of deploying a large number of copies of a flow-based program (FBP) in a congestion zone is modeled as a network traffic congestion problem. The mapping problem in a congestion zone is then modeled as an Integer Quadratic Constrained Programming (IQCP) problem, which is proved to be NP-hard. Given that, an approximation algorithm based on LP relaxation and an efficient service-relocating heuristic are designed to reduce the computational complexity. For each congestion zone, the algorithm performs globally optimized mapping for multiple regions and then requests multiple deployment delegators to reprogram individual devices.

Finally, with the growing adoption of IoT applications, dedicated, single-purpose devices are giving way to smart, adaptive devices with rich capabilities that use a platform or API, collect and analyze data, and make their own decisions. To facilitate building intelligent applications in IoT, I have implemented the edge framework for supporting reliable streaming analytics on edge devices. In addition, a progression framework is built to give IoT applications self-management capability; it provides a progressive architecture and a programming paradigm for bridging service-oriented applications with the power of big data on the cloud. In this thesis, I present the detailed design of the progression framework, which incorporates the above features for building scalable management of IoT systems through a flexible middleware.
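To make the single-hop formulation above more concrete, the sketch below builds a small conflict graph over candidate service co-locations and applies a standard greedy maximum-weight independent set heuristic. The graph, the weights, and the choice of heuristic are illustrative assumptions, not the exact construction used in the thesis.

```python
# Illustrative sketch only: a greedy heuristic for the Maximum Weighted
# Independent Set (MWIS) problem on a small co-location graph. Nodes stand for
# candidate co-locations of two services on one device; edges connect
# incompatible candidates. Weights and graph are invented for illustration.

def greedy_mwis(weights, edges):
    """Pick nodes in order of weight/(degree+1), skipping neighbors of chosen nodes."""
    neighbors = {v: set() for v in weights}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    order = sorted(weights, key=lambda v: weights[v] / (len(neighbors[v]) + 1), reverse=True)
    chosen, blocked = [], set()
    for v in order:
        if v not in blocked:
            chosen.append(v)
            blocked.add(v)
            blocked |= neighbors[v]
    return chosen

if __name__ == "__main__":
    # Node -> estimated energy saving if that co-location is applied.
    weights = {"A+B@dev1": 5.0, "B+C@dev2": 4.0, "C+D@dev1": 3.5, "E+F@dev3": 2.0}
    # Edges mark pairs of co-locations that cannot both be applied.
    edges = [("A+B@dev1", "B+C@dev2"), ("B+C@dev2", "C+D@dev1")]
    print(greedy_mwis(weights, edges))
```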
6. Locating an Autonomous Car Using the Kalman Filter to Reduce Noise
Hema Balaji, Nagarathna, 06 March 2019
With the growing use of autonomous vehicles and similar systems, it is critical for those systems to be reliable, accurate, and error free. Sensor data are of vital importance for ensuring the fidelity of navigation and the decision-making ability of autonomous systems. Several existing models achieve accuracy in the sensor data, but they are all application specific and have limited applicability for future systems.

This paper proposes a method for reducing errors in sensor data through sensor fusion based on the Kalman filter. The proposed model is intended to be versatile and to adapt to the needs of any robotic vehicle with only minor modifications. The model is a basic framework for normalizing the speed of autonomous robots. Moreover, it is capable of ensuring smooth operation of individual autonomous robots and facilitates cooperative applications. The model provides a framework that is more reliable, accurate, and error free than other existing models, thereby enabling its implementation in similar robotic applications. The model can be extended to other applications with only minimal changes; it therefore promises to revolutionize the way that human beings use, interact with, and benefit from autonomous devices in day-to-day activities.
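A minimal scalar Kalman filter for smoothing noisy position readings is sketched below. The constant-position process model, the noise values, and the sample measurements are invented for illustration and are much simpler than a full vehicle-localization filter.

```python
# Illustrative sketch only: a one-dimensional Kalman filter smoothing noisy
# position measurements. The constant-position model and noise parameters are
# assumptions chosen for illustration, not values from the thesis.

def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    x, p = measurements[0], 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by the process noise.
        p += process_var
        # Update: blend prediction and measurement using the Kalman gain.
        k = p / (p + meas_var)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    noisy = [10.2, 9.7, 10.5, 10.1, 9.9, 10.4, 10.0]
    for z, est in zip(noisy, kalman_1d(noisy)):
        print(f"measured {z:5.2f} -> estimate {est:5.2f}")
```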
7. Monitoring vehicle entry and departure through location-based services
Chopra, Varun Nikhil G., 10 December 2015
Shipping ports and terminals are usually very busy, with long periods of high traffic. The heavily trafficked areas of shipping terminals contribute to high-density traffic volume, which affects the community and the port by extending commuters' travel time, delaying shipments of goods, and potentially creating safety hazards. Location-based services can measure the time a vehicle enters and exits the terminals at the port; in particular, geofencing can help determine the entry and exit times of vehicles. These services would be used in hopes of finding an efficient way to reduce traffic by notifying terminals of vehicles' entry and departure times. By gathering vehicle travel times, terminal representatives could develop a process to operate the port more efficiently. A system consisting of two applications was built to gather adequate travel times: a server-side application with REST endpoints exposed and a client-side application that consumes those endpoints. This study provides an analysis and implementation of how location-based services establish a means to measure the entry and exit times of vehicles moving through geofenced gates.
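The sketch below shows one simple way to derive entry and exit timestamps from a stream of GPS fixes checked against a circular geofence. The fence location, radius, and record format are assumptions for illustration, not the study's implementation.

```python
# Illustrative sketch only: detecting geofence entry and exit times from a
# sequence of (timestamp, lat, lon) fixes. The circular fence, its radius,
# and the sample data are invented for illustration.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def entry_exit_times(fixes, fence_lat, fence_lon, radius_m):
    """Return (entry_time, exit_time) pairs for each visit inside the fence."""
    visits, entry = [], None
    for t, lat, lon in fixes:
        inside = haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
        if inside and entry is None:
            entry = t                       # vehicle just entered the fence
        elif not inside and entry is not None:
            visits.append((entry, t))       # vehicle just left the fence
            entry = None
    return visits

if __name__ == "__main__":
    fixes = [(0, 33.750, -118.200), (60, 33.754, -118.216),
             (120, 33.7545, -118.2165), (180, 33.760, -118.230)]
    print(entry_exit_times(fixes, fence_lat=33.754, fence_lon=-118.216, radius_m=300))
```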
8. Beneath the Attack Surface
Mowery, Keaton, 18 August 2015
Computer systems are often analyzed as purely virtual artifacts, a collection of software operating on a Platonic ideal of a computer. When software is executed, it runs on actual hardware: an increasingly complex web of analog physical components and processes, cleverly strung together to present an illusion of pure computation. When an abstract software system is combined with individual hardware instances to form functioning systems, the overall behavior varies subtly with the hardware. These minor variations can change the security and privacy guarantees of the entire system, in both beneficial and harmful ways. We examine several such security effects in this dissertation.

First, we look at the fingerprinting capability of JavaScript and HTML5: when invoking existing features of modern browsers, such as JavaScript execution and 3-D graphics, how are the results affected by the underlying hardware, and how distinctive is the resulting fingerprint?

Second, we discuss AES side channel timing attacks, a technique to extract information from AES encryption running on hardware. We present several reasons why we were unable to reproduce this attack against modern hardware and a modern browser.

Third, we examine positive uses of hardware variance: namely, seeding Linux's pseudorandom number generator at kernel initialization time with true entropy gathered during early boot. We examine the utility of these techniques on a variety of embedded devices, and give estimates for the amount of entropy each can generate.

Lastly, we evaluate a cyberphysical system: one which combines physical processes and analog sensors with software control and interpretation. Specifically, we examine the Rapiscan Secure 1000 backscatter X-ray full-body scanner, a device for looking under a scan subject's clothing, discovering any contraband secreted about their person. We present a full security analysis of this system, including its hardware, software, and underlying physics, and show how an adaptive, motivated adversary can completely subvert the scan to smuggle contraband, such as knives, firearms, and plastic explosives, past a Secure 1000 checkpoint. These attacks are entirely based upon understanding the physical processes and sensors which underlie this cyberphysical system, and involve adjusting the contraband's location and shape until it simply disappears.
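As a rough illustration of harvesting entropy from hardware timing variation, the sketch below mixes fine-grained timer jitter into a hash-based seed. This toy analogue is not the boot-time technique analyzed in the dissertation, and a real system would need careful entropy estimation before trusting such a seed.

```python
# Illustrative sketch only: deriving a seed from timing jitter between
# fine-grained clock reads, a toy analogue of harvesting entropy from
# hardware variance. Not the dissertation's method; real deployments must
# estimate how much entropy such jitter actually contains.

import hashlib
import time

def jitter_seed(samples=2048):
    h = hashlib.sha256()
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        delta = now - prev                 # jitter between consecutive reads
        prev = now
        h.update(delta.to_bytes(8, "little", signed=True))
    return int.from_bytes(h.digest()[:8], "little")

if __name__ == "__main__":
    print(hex(jitter_seed()))
```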
9. Efficient ray tracing architectures
Spjut, Josef Bo, 22 October 2015
This dissertation presents computer architecture designs that are efficient for ray tracing based rendering algorithms. The primary observation is that ray tracing maps better to independent thread issue hardware designs than it does to the dependent thread and data designs used in most commercial architectures. While independent thread issue causes extra overhead in the fetch and issue parts of the pipeline, the number of computation resources required can be reduced through the sharing of less frequently used execution units. Furthermore, since all the threads run a single program on multiple data (SPMD), thread processors can share instruction and data caches. Ray tracing needs read-only access to the scene data during each frame, so caches can be optimized for reading, and traditional cache coherence protocols are unnecessary for maintaining coherent memory access. The resultant image exists as a write-only frame buffer, allowing memory writes to avoid the cache entirely, preventing cache pollution and increasing the performance of smaller caches.

Commercial real-time rendering systems lean heavily on high-performance graphics processing units (GPUs) that use the rasterization and z-buffer algorithms for rendering. A single pass of rasterization throws out much of the global scene information by streaming the surface data that a ray tracer keeps resident in memory. As a result, ray tracing is more naturally able to support rendering effects involving global information, such as shadows, reflections, refractions, and camera lens effects. Rasterization has a time complexity of approximately O(N log(P)), where N is the number of primitive polygons and P is the number of pixels in the image. Ray tracing, in contrast, has a time complexity of O(P log(N)), making ray tracing scale better to large scenes with many primitive polygons and allowing for increased surface detail. Finally, once the number of pixels reaches its limit, ray tracing should exceed the performance of rasterization by allowing the number of objects to increase with less of a penalty on performance.
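To make the scaling comparison concrete, the toy calculation below plugs example values of N and P into the two complexity expressions quoted above. Constant factors are ignored and the numbers are illustrative only.

```python
# Illustrative sketch only: comparing the cost expressions O(N log P) for
# rasterization and O(P log N) for ray tracing at a fixed image size while
# scene complexity grows. Constant factors are deliberately ignored.

import math

P = 1920 * 1080                         # pixels in a 1080p frame
for N in (10**4, 10**6, 10**8):         # primitive polygons in the scene
    raster = N * math.log2(P)
    raytrace = P * math.log2(N)
    print(f"N={N:>9}: N*log2(P)={raster:.3e}   P*log2(N)={raytrace:.3e}")
```

At a fixed resolution, the ray tracing expression grows only logarithmically with N, which is the scaling advantage the abstract describes for scenes with many primitives.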
10. Heterogeneity and Density Aware Design of Computing Systems
Arora, Manish, 03 August 2018
The number of programmable cores available in systems continues to increase with advances in device scaling, integration, and iterative improvements. Today, systems are not just integrating more cores but also integrating a variety of different types of processing cores, resulting in dense heterogeneous systems. However, important questions remain about the design methodology for dense heterogeneous systems. This thesis seeks to address these questions.

One typical methodology for heterogeneous system design is to compose systems out of parts of homogeneous systems. Another commonly used technique to enable density is replication. However, these design methodologies are “heterogeneous system oblivious” and “density oblivious”: the components of the system are neither aware of nor optimized for the heterogeneous system they will become part of, nor are they aware of the existence of other replicated components. This thesis shows that “heterogeneous system oblivious” and “density oblivious” design methodologies result in inefficient systems, and it proposes heterogeneity-aware and density-aware approaches to designing dense heterogeneous architectures.