  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hardware-assisted security: bloom cache – scalable low-overhead control flow integrity checking

Young, Vinson 21 September 2015 (has links)
Computers were not built with security in mind, and security still often takes a back seat to performance. However, in an era when so much sensitive data is stored, in cloud storage and in huge customer databases, much has to be done to keep this data safe from intruders. Control flow hijacking attacks, ranging from basic code injection to return-into-libc and other code re-use attacks, are among the most dangerous attacks. Currently available solutions, like Data Execution Prevention, which blocks execution of writable pages to stop code injection, offer no efficient protection against code re-use attacks, which execute valid code in a malicious order. To protect against control flow hijacking attacks, this work proposes an architecture to make Control Flow Integrity, a technique that validates control flow against a pre-computed control flow graph, practical. Current implementations of Control Flow Integrity have problems with code modularity, performance, or scalability, so I propose Dynamic Bloom Cache, a blocked-Bloom-filter-based approach, to solve these implementation issues.
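A minimal sketch (not the thesis's hardware design) of the data structure behind this approach: a blocked Bloom filter that records pre-computed control-flow edges and answers membership queries with a single block lookup. The hash function, block count, and (source, target) edge encoding here are illustrative assumptions.

```python
import hashlib

class BlockedBloomFilter:
    """Illustrative blocked Bloom filter: each key hashes to one block and to a
    few bit positions inside that block, so a lookup touches a single block."""
    def __init__(self, num_blocks=1024, bits_per_block=64, hashes=3):
        self.num_blocks = num_blocks
        self.bits_per_block = bits_per_block
        self.hashes = hashes
        self.blocks = [0] * num_blocks  # each block used as a 64-bit bitmap

    def _bit_positions(self, key: bytes):
        digest = hashlib.sha256(key).digest()
        block = int.from_bytes(digest[:4], "little") % self.num_blocks
        bits = [digest[4 + i] % self.bits_per_block for i in range(self.hashes)]
        return block, bits

    def add(self, key: bytes):
        block, bits = self._bit_positions(key)
        for b in bits:
            self.blocks[block] |= (1 << b)

    def contains(self, key: bytes) -> bool:
        block, bits = self._bit_positions(key)
        return all(self.blocks[block] & (1 << b) for b in bits)

def edge_key(src: int, dst: int) -> bytes:
    # A control-flow edge is identified by its (source, target) addresses.
    return src.to_bytes(8, "little") + dst.to_bytes(8, "little")

# Populate the filter from a hypothetical pre-computed CFG, then check edges.
cfg_edges = [(0x400100, 0x400200), (0x400200, 0x400350)]
bf = BlockedBloomFilter()
for s, d in cfg_edges:
    bf.add(edge_key(s, d))

assert bf.contains(edge_key(0x400100, 0x400200))   # valid edge passes
print(bf.contains(edge_key(0x400100, 0x400660)))    # likely False: flagged as a violation
```

A blocked layout matters for hardware because each query touches only one small block, so a check fits in a single cache or SRAM access; the cost is a modest increase in the false-positive rate compared with an unblocked filter of the same size.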
2

Control flow speculation for distributed architectures

Ranganathan, Nitya 21 October 2009 (has links)
As transistor counts, power dissipation, and wire delays increase, the microprocessor industry is transitioning from chips containing large monolithic processors to multi-core architectures. The granularity of cores determines the mechanisms for branch prediction, instruction fetch and map, data supply, instruction execution, and completion. Accurate control flow prediction is essential for high-performance processors with large instruction windows and high-bandwidth execution. This dissertation considers cores with very large granularity, such as TRIPS, as well as cores with extremely small granularity, such as TFlex, and explores control flow speculation issues in such processors. Both TRIPS and TFlex are distributed block-based architectures and require control speculation mechanisms that can work in a distributed environment while supporting efficient block-level prediction, misprediction detection, and recovery. This dissertation aims to provide efficient control flow prediction techniques for distributed block-based processors. First, we discuss simple exit predictors inspired by branch predictors and describe the design of the TRIPS prototype block predictor. Area and timing trade-offs in the predictor implementation are presented. We report the predictor misprediction rates from the prototype chip for the SPEC benchmark suite. Next, we look at the performance bottlenecks in the prototype predictor and present a detailed analysis of exit and target predictors using basic prediction components inspired by branch predictors. This study helps in understanding what types of predictors are effective for exit and target prediction. Using the results of our prediction analysis, we propose novel hardware techniques to improve the accuracy of block prediction. To understand whether exit prediction is inherently more difficult than branch prediction, we measure the correlation among branches in basic blocks and hyperblocks and examine the loss in correlation due to hyperblock construction. Finally, we propose block predictors for TFlex, a fully distributed architecture that uses composable lightweight processors. We describe various possible designs for distributed block predictors and a classification scheme for such predictors. We present results for predictors from each of the design points for distributed prediction.
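As a rough illustration of what an exit predictor does (a sketch under assumed table sizes and hashing, not the TRIPS prototype design), the following models a predictor that guesses which of a block's exits will fire, indexed by the block address combined with a history of recent exits, in the spirit of two-level branch predictors.

```python
class SimpleExitPredictor:
    """Predicts which exit (numbered branch) of a block will fire, using a
    table indexed by the block address hashed with recent exit history."""
    def __init__(self, table_bits=12, history_len=4, exits_per_block=8):
        self.table_size = 1 << table_bits
        self.exits_per_block = exits_per_block
        self.history_len = history_len
        self.table = [0] * self.table_size   # last observed exit id per entry
        self.history = 0                     # packed ids of recent exits

    def _index(self, block_addr: int) -> int:
        return (block_addr ^ self.history) % self.table_size

    def predict(self, block_addr: int) -> int:
        return self.table[self._index(block_addr)]

    def update(self, block_addr: int, actual_exit: int):
        self.table[self._index(block_addr)] = actual_exit
        # Shift the actual exit id into the global exit-history register.
        bits = self.exits_per_block.bit_length()
        mask = (1 << (bits * self.history_len)) - 1
        self.history = ((self.history << bits) | actual_exit) & mask
```

On a misprediction, fetch is redirected to the correct exit target and the speculatively fetched blocks are squashed, which is why prediction accuracy is critical for large-window distributed designs.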
3

Program analysis using game semantics

Sampath, Prahladavaradan January 2000 (has links)
No description available.
4

Decentralised control flow : a computational model for distributed systems

Mundy, David H. January 1988 (has links)
This thesis presents two sets of principles for the organisation of distributed computing systems. Details of models of computation based on these principles are given, together with proposals for programming languages based on each model of computation. The recursive control flow principles are based on the concept of recursive structuring of computing systems. A recursive control flow computing system comprises a group of subordinate computing systems connected together by a communications medium. Each subordinate computing system may either be a component computing system, which consists of a processing unit, some memory components, and input/output devices, or is itself a recursive control flow computing system. The memory components of all the subordinate computing systems within a recursive control flow computing system are arranged in a hierarchy. Using suitable addresses, any part of the hierarchy is accessible to any sequence of instructions which may be executed by the processing unit of a subordinate computing system. This global accessibility gives rise to serious difficulties in understanding the meaning of programs written in a programming language based on the recursive control flow model of computation. Reasoning about a particular program in isolation is difficult because the potential interference between the execution of different programs cannot be ignored. The alternative principles, decentralised control flow, restrict the global accessibility of the memory components of the subordinate computing systems. The concept of objects forms the basis of these principles. Information may flow along unnamed channels between instances of these objects, this being the only way in which one instance of an object may communicate with some other instance of an object. Reasoning about a particular program written in a programming language based on the decentralised control flow model of computation is easier since it is guaranteed that there will be no interference between the execution of different programs.
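A small sketch of the decentralised discipline, assuming Python queues stand in for the thesis's unnamed channels: an object instance has no access to another instance's memory and can exchange information only over a channel.

```python
import threading
import queue

def accumulator(inbox: queue.Queue, outbox: queue.Queue):
    # This object's only view of the rest of the system is its two channels;
    # it cannot name or address any other object's memory.
    total = 0
    while True:
        item = inbox.get()
        if item is None:          # end-of-stream marker
            outbox.put(total)
            return
        total += item

inbox, outbox = queue.Queue(), queue.Queue()
threading.Thread(target=accumulator, args=(inbox, outbox)).start()
for value in (1, 2, 3):
    inbox.put(value)
inbox.put(None)
print(outbox.get())   # 6: the only information that crosses the object boundary
```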
5

Dynamic warp formation : exploiting thread scheduling for efficient MIMD control flow on SIMD graphics hardware

Fung, Wilson Wai Lun 11 1900 (has links)
Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead for control hardware. Scalar threads running the same computing kernel are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture and programs using these instructions may experience reduced performance due to the way branch execution is supported by hardware. One solution is to add a stack to allow different SIMD processing elements to execute distinct program paths after a branch instruction. The occurrence of diverging branch outcomes for different processing elements significantly degrades performance using this approach. In this thesis, we propose dynamic warp formation and scheduling, a mechanism for more efficient SIMD branch execution on GPUs. It dynamically regroups threads into new warps on the fly following the occurrence of diverging branch outcomes. We show that a realistic hardware implementation of this mechanism improves performance by an average of 47% for an estimated area increase of 8%.
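A minimal software model of the regrouping idea (not the proposed hardware): after a divergent branch, threads are binned by their next PC and reassembled into new warps so that all lanes of a warp again execute the same instruction. The warp width is an illustrative assumption.

```python
from collections import defaultdict

WARP_WIDTH = 8  # illustrative; real GPUs commonly use 32 lanes per warp

def reform_warps(threads):
    """threads: list of (thread_id, next_pc) pairs after a divergent branch.
    Group threads that agree on their next PC, then carve each group into
    warps of up to WARP_WIDTH lanes."""
    by_pc = defaultdict(list)
    for tid, pc in threads:
        by_pc[pc].append(tid)
    warps = []
    for pc, tids in by_pc.items():
        for i in range(0, len(tids), WARP_WIDTH):
            warps.append((pc, tids[i:i + WARP_WIDTH]))
    return warps

# Half the threads took the branch (pc 0x20) and half fell through (pc 0x24);
# regrouping restores fully populated warps on each path.
threads = [(t, 0x20 if t % 2 == 0 else 0x24) for t in range(16)]
print(reform_warps(threads))
```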
6

Symbolic Interpretation of Legacy Assembly Language

Chowdhury, Pulak Kumar 18 August 2005 (has links)
Many industries have legacy software systems that are important to them but are difficult to maintain due to a lack of understanding of those systems, a result of inadequate or inconsistent documentation. Although the costs of redesigning such a system may be large, some organizations still plan to reverse engineer software specification documents from the code to alleviate a large part of this burden. This thesis provides an incremental and modular approach to create a process and tools to extract the semantics of legacy assembly code.

Our techniques consist of static analysis and symbolic interpretation in order to reverse engineer the semantics of legacy software. We examine the case of IBM-1800 programs in detail. From the abstract model of the operational semantics of the IBM-1800, we simultaneously obtain an emulator and a symbolic analysis process. Augmented with control flow information, we can use the symbolic analysis to provide complete semantics for the code sequences of interest. We can also generate Data Flow Graphs to depict the flow of data in those code segments. The whole process of extracting semantic information from the assembly code is fully automated, with only a little human intervention at the initial step.

We use Haskell as our implementation language; its important features help us create modular and well-structured software. The literate programming documentation style used in this thesis increases the readability and consistency of the implementation's documentation.

The process and the associated tools created in this thesis are used in a large reverse engineering project whose goal is to extract a requirements specification from legacy assembly code. This project is funded jointly by Ontario Power Generation (OPG) and Communications and Information Technology Ontario (CITO).
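A toy sketch of symbolic interpretation, not the IBM-1800 tooling built in the thesis: registers hold expressions rather than numbers, so running a short instruction sequence yields a formula describing its overall effect. The three-instruction mini-language is assumed purely for illustration.

```python
def symbolic_run(program, state=None):
    """Interpret a tiny assembly-like program over symbolic values.
    Each register (or memory cell) maps to an expression string."""
    state = dict(state or {})
    for op, dst, src in program:
        if op == "LOAD":                 # dst := symbolic contents of mem[src]
            state[dst] = f"mem[{src}]"
        elif op == "ADD":                # dst := dst + src (register or literal)
            rhs = state.get(src, src)
            state[dst] = f"({state[dst]} + {rhs})"
        elif op == "STORE":              # mem[dst] := expression held in src
            state[f"mem[{dst}]"] = state[src]
    return state

program = [
    ("LOAD",  "A", "X"),
    ("ADD",   "A", "1"),
    ("STORE", "Y", "A"),
]
print(symbolic_run(program))
# {'A': '(mem[X] + 1)', 'mem[Y]': '(mem[X] + 1)'}
```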
7

Uma arquitetura de baixo acoplamento para execução de padrões de controle de fluxo em grades / A loosely coupled architecture to run workflow control-flow patterns in grid

Nardi, Alexandre Ricardo 27 April 2009 (has links)
The use of workflow control-flow patterns in e-Science applications results in productivity improvements, allowing the scientist to concentrate on his or her own research area. However, the use of workflow control-flow patterns for execution in grids remains an open question. This work presents a loosely coupled and extensible architecture that allows patterns to be executed with or without grids, transparently to the scientist. It also describes the Combined Join Pattern, which covers parallelization scenarios commonly found in e-Science applications. As a result, it is expected to support the scientist's work, offering greater flexibility in grid usage and in representing parallelization scenarios.
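A small sketch of the kind of join the abstract refers to, assuming a local thread pool stands in for grid workers: parallel branches are launched and the flow continues once the joining condition is satisfied. The function name combined_join and its joining rule are illustrative assumptions, not the thesis's formal pattern definition.

```python
from concurrent.futures import (ThreadPoolExecutor, wait,
                                FIRST_COMPLETED, ALL_COMPLETED)

def combined_join(tasks, mode="all"):
    """Run branch tasks in parallel and join either when every branch
    finishes ('all') or as soon as the first branch finishes ('first')."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task) for task in tasks]
        policy = ALL_COMPLETED if mode == "all" else FIRST_COMPLETED
        done, _pending = wait(futures, return_when=policy)
        return [f.result() for f in done]

# Three independent branches of a workflow, joined when all complete.
results = combined_join([lambda: 1, lambda: 2, lambda: 3], mode="all")
print(sorted(results))   # [1, 2, 3]
```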
8

Taking Back Control: Closing the Gap Between C/C++ and Machine Semantics

Nathan H. Burow (5929538) 03 January 2019 (has links)
Control-flow hijacking attacks allow adversaries to take over seemingly benign software, e.g., a web browser, and cause it to perform malicious actions, i.e., grant attackers a shell on a system. Such control-flow hijacking attacks exploit a gap between high-level language semantics and the machine language that they are compiled to. In particular, systems software such as web browsers and servers are implemented in C/C++, which provides no runtime safety guarantees, leaving memory and type safety exclusively to programmers. Compilers are ideally situated to perform the required analysis and close the semantic gap between C/C++ and machine languages by adding instrumentation to enforce full or partial memory safety.

In unprotected C/C++, adversaries must be assumed to be able to control the contents of any writeable memory location (arbitrary writes) and to read the contents of any readable memory location (arbitrary reads). Defenses against such attacks range from enforcing full memory safety to protecting only select information, normally code pointers, to prevent control-flow hijacking attacks. We advance the state of the art for control-flow hijacking defenses by improving the enforcement of full memory safety, as well as partial memory safety schemes for protecting code pointers.

We demonstrate a novel mechanism for enforcing full memory safety, which denies attackers both arbitrary reads and arbitrary writes at half the performance overhead of the prior state-of-the-art mechanism. Our mechanism relies on a novel metadata scheme for maintaining bounds information about memory objects. Further, we maintain the application binary interface (ABI), support all C/C++ language features, and are mature enough to protect all of user space, and in particular libc.

Backwards control-flow transfers, i.e., returns, are a common target for attackers. In particular, return-oriented programming (ROP) is a code-reuse attack technique built around corrupting return addresses. Shadow stacks prevent ROP attacks by providing partial memory safety for programs, namely integrity protecting the return address. We provide a full taxonomy of shadow stack designs, including two previously unexplored designs, and demonstrate that with compiler support shadow stacks can be deployed in practice. Further, we examine the state of hardware support for integrity-protected memory regions within a process' address space. Control-Flow Integrity (CFI) is a popular technique for securing forward edges, e.g., indirect function calls, from being used for control-flow hijacking attacks. CFI is a form of partial memory safety that provides weak integrity for function pointers by restricting them to a statically determined set of values based on the program's control-flow graph. We survey existing techniques and quantify the protection they provide on a per-callsite basis. Building off this work, we propose a new security policy, Object Type Integrity, which provides full integrity protection for virtual table pointers on a per-object basis for C++ polymorphic objects.
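A minimal software model of the shadow stack idea described above (not the compiler instrumentation surveyed in the thesis): return addresses are duplicated onto a separate, integrity-protected stack on call and compared on return, so a corrupted return address is caught before it redirects control flow.

```python
class ShadowStack:
    """Keeps a protected copy of every return address; a mismatch on return
    indicates the ordinary stack was corrupted (e.g., by a ROP attack)."""
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr: int):
        self._stack.append(return_addr)

    def on_return(self, return_addr_from_stack: int) -> int:
        expected = self._stack.pop()
        if expected != return_addr_from_stack:
            raise RuntimeError(
                f"control-flow violation: expected {expected:#x}, "
                f"got {return_addr_from_stack:#x}")
        return return_addr_from_stack

ss = ShadowStack()
ss.on_call(0x401234)
ss.on_return(0x401234)            # matches: return proceeds normally
ss.on_call(0x401250)
try:
    ss.on_return(0xdeadbeef)      # overwritten return address is detected
except RuntimeError as e:
    print(e)
```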
