1

Semidefinite programming, binary codes and a graph coloring problem

Li, Chao 29 May 2015 (has links)
"Experts in information theory have long been interested in the maximal size, A(n, d), of a binary error-correcting code of length n and minimum distance d, The problem of determining A(n, d) involves both the construction of good codes and the search for good upper bounds. For quite some time now, Delsarte's linear programming approach has been the dominant approach to obtaining the strongest general purpose upper bounds on the efficiency of error-correcting codes. From 1973 forward, the linear programming bound found many applications, but there were few significant theoretical advances until Schrijver proposed a new code upper bound via semidefinite programming in 2003. Using the Terwilliger algebra, a recently introduced extension of the Bose-Mesner algebra, Schrijver formulated a new SDP strengthening of the LP approach. In this project we look at the dual solutions of the semidefinite programming bound for binary error-correcting codes. We explore the combinatorial meaning of these variables for small n and d, such as n = 4 and d = 2. To obtain information like this, we wrote a computer program with both Matlab and CVX modules to get solution of our primal SDP formulation. Our program efficiently generates the primal solutions with corresponding constraints for any n and d. We also wrote a program in C++ to parse the output of the primal SDP problem, and another Matlab script to generate the dual SDP problem, which could be used in assigning combinatorial meaning to the values given in the dual optimal solution. Our code not only computes both the primal and dual optimal variable values, but allows the researcher to display them in meaningful ways and to explore their relationship and dependence on arameters. These values are expected to be useful for later study of the combinatorial meaning of such solutions."
2

Úprava programů v Javascriptu pomocí překladače / Javascript Program Obfuscation

Kuna, Matej January 2010 (has links)
In this project I deal with a basic description of source code obfuscation and size reduction techniques for programming languages. The project is primarily aimed at the scripting language JavaScript and includes an analysis of free obfuscation tools for this language. In the final stage I created an application, based on the theoretical part of this project, that is able to obfuscate JavaScript source code at different levels.
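The application described above is not reproduced here; as a toy illustration only (all identifiers and the sample snippet are made up), the following Python sketch shows two of the basic steps the abstract mentions, identifier renaming and whitespace reduction, applied to a JavaScript fragment.

```python
# Toy sketch of two basic obfuscation/size-reduction steps: identifier renaming
# and whitespace collapsing. Purely illustrative; it is not the thesis'
# application, and a regex-based renamer is not robust for real JavaScript.
import re

def toy_obfuscate(js_source, names):
    # Replace each listed identifier with a short, meaningless name.
    for i, name in enumerate(names):
        js_source = re.sub(rf"\b{re.escape(name)}\b", f"_{i:x}", js_source)
    # Collapse runs of whitespace to reduce the output size.
    return re.sub(r"\s+", " ", js_source).strip()

code = """
function addTotals(priceList) {
    let total = 0;
    for (const price of priceList) { total += price; }
    return total;
}
"""
print(toy_obfuscate(code, ["addTotals", "priceList", "total", "price"]))
# function _0(_1) { let _2 = 0; for (const _3 of _1) { _2 += _3; } return _2; }
```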
3

Selective software-implemented hardware fault tolerance techniques to detect soft errors in processors with reduced overhead / Técnicas seletivas de tolerância a falhas em software com custo reduzido para detectar erros causados por falhas transientes em processadores

Chielle, Eduardo January 2016 (has links)
Software-based fault tolerance techniques are a low-cost way to protect processors against soft errors. However, they introduce significant overheads in execution time and code size, which consequently increase the energy consumption. Systems operating under time or energy restrictions may not be able to make use of these techniques. For this reason, this work proposes software-based fault tolerance techniques with lower overheads and fault coverage similar to state-of-the-art software techniques. Since detection is less costly than correction, the work focuses on software-based detection techniques. Firstly, a set of data-flow techniques called VAR is proposed. The techniques are based on general building rules to allow an exhaustive assessment, in terms of reliability and overheads, of different technique variations. The rules define how the technique duplicates the code and inserts checkers; each technique uses a different set of rules. Then, a control-flow technique called SETA (Software-only Error-detection Technique using Assertions) is introduced. Compared with a state-of-the-art technique, SETA is 11.0% faster and occupies 10.3% fewer memory positions. The most promising data-flow techniques are combined with the control-flow technique in order to protect both the data flow and the control flow of the target application. To reduce the overheads even further, methods to selectively apply the proposed software techniques have been developed. For the data-flow techniques, instead of protecting all registers, only a set of selected registers is protected; the set is chosen based on a metric that analyzes the code and ranks the registers by their criticality. For the control-flow technique, two approaches are taken: (1) removing checkers from basic blocks: all basic blocks are protected by SETA, but only selected basic blocks have checkers inserted, and (2) selectively protecting basic blocks: only a set of basic blocks is protected. The techniques and their selective versions are evaluated in terms of execution time, code size, fault coverage, and Mean Work To Failure (MWTF), a metric that measures the trade-off between fault coverage and execution time. Results show that it was possible to reduce the overheads without affecting the fault coverage, and that for a small reduction in fault coverage it was possible to reduce the overheads significantly. Lastly, since evaluating all possible combinations for the selective hardening of every application takes too much time, this work uses a method to extrapolate the results obtained by simulation in order to find the parameters for the selective combination of data-flow and control-flow techniques that are the most promising candidates to improve the trade-off between reliability and overheads.
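To make the duplicate-and-check idea behind data-flow rules concrete, here is a deliberately simplified, source-level Python sketch; the real techniques such as VAR replicate registers and instructions at the assembly level, so the function below is only an assumed analogy, not code from the thesis.

```python
# Simplified source-level sketch of the duplicate-and-check rule used by
# data-flow SIHFT techniques. Real implementations replicate registers and
# instructions in the generated assembly; this is only an analogy.

def protected_mac(a, b, acc):
    # Rule: keep a shadow copy of every protected value.
    a2, b2, acc2 = a, b, acc
    # Rule: duplicate the computation on the shadow copies.
    r = acc + a * b
    r2 = acc2 + a2 * b2
    # Rule: insert a checker before the value leaves the protected sphere
    # (e.g. before a store or a branch that depends on it).
    if r != r2:
        raise RuntimeError("soft error detected: replicas diverge")
    return r

print(protected_mac(3, 4, 5))  # 17 when no fault flips one of the replicas
```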
4

TAMING IRREGULAR CONTROL-FLOW WITH TARGETED COMPILER TRANSFORMATIONS

Charitha Saumya Gusthinna Waduge (15460634) 15 May 2023 (has links)
Irregular control-flow structures like deeply nested conditional branches are common in real-world software applications. Improving the performance and efficiency of such programs is often challenging because it is difficult to analyze and optimize programs with irregular control flow. We observe that real-world programs contain similar or identical computations within different code paths of conditional branches. Compilers can merge similar code to improve performance or code size. However, existing compiler optimizations like code hoisting/sinking and tail merging do not fully exploit this opportunity. We propose a new technique called Control-Flow Melding (CFM) that can merge similar code sequences at the control-flow region level. We evaluate CFM in two applications. First, we show that CFM reduces control divergence in GPU programs and improves performance. Second, we apply CFM to CPU programs and show its effectiveness in reducing code size without sacrificing performance. In the next part of this dissertation, we investigate how CFM can be extended to improve dynamic test generation techniques like Dynamic Symbolic Execution (DSE). DSE suffers from the path explosion problem when many conditional branches are present in the program. We propose a non-semantics-preserving branch elimination transformation called CFM-SE that reduces the number of symbolic branches in a program. We also provide a framework for detecting and reasoning about false-positive bugs that might be added to the program by non-semantics-preserving transformations like CFM-SE. Furthermore, we evaluate CFM-SE on real-world applications and show its effectiveness in improving DSE performance and code coverage.
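Since CFM operates on the compiler's intermediate representation, the Python fragment below is only a source-level analogue of the idea (the function names are invented): similar work on the two sides of a branch is merged so that a diverging GPU warp, or the generated code, pays for it once.

```python
# Source-level analogue of Control-Flow Melding (illustration only; the
# dissertation's CFM transformation merges similar single-entry single-exit
# regions in the compiler IR, not Python source).

def shared_work(x):
    return x * x + 7 * x          # stand-in for the similar code in both paths

# Before: both sides of the branch repeat the similar computation, so a
# diverging GPU warp executes it twice and the binary carries it twice.
def before(x, flag):
    if flag:
        return shared_work(x) + 1
    return shared_work(x) - 1

# After melding: the similar region executes once; only the genuinely
# divergent remainder of the two paths stays under the branch.
def after(x, flag):
    t = shared_work(x)
    return t + 1 if flag else t - 1
```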
5

Performance-Aware Code Size Optimization of Generic Functions through Automatic Implementation of Dynamic Dispatch / Prestandamedveten kodstorleksoptimering av generiska funktioner genom automatisk tillämpning av dynamic dispatch

Härnqvist, Ivar January 2022 (has links)
Monomorphization and dynamic dispatch are two common techniques for implementing polymorphism in statically typed programming languages. Function templates in C++ use the former technique to enable algorithms written as generic functions to be efficiently reused with multiple different data types by producing a separate function instantiation for each invocation that uses a unique permutation of argument types. This avoids the overhead of indirection associated with dynamic dispatch and allows the generated code of each instantiation to be optimized by the compiler for its specific concrete types, which typically yields great improvements in runtime performance over any dynamic approach. The disadvantage of this implementation, compared to the type-erased generics found in many other programming languages, is that careless over-use of templates with many different argument types can lead to an excessive amount of redundant code being generated for the same function. This increase in code size may increase the binary size of the final program and reduce the amount of useful code that can fit into the processor's instruction cache during execution, reducing code locality and thereby potentially reducing performance. Monomorphization can also increase compilation time due to the increase in generated code that needs to be compiled and optimized. This thesis presents a heuristic-based approach to generic programming that allows function templates to be automatically converted to use dynamic dispatch in scenarios where the resulting negative impact on runtime performance is predicted to be low. The thesis project includes the development of a proof of concept plugin for the Clang compiler frontend that can be used to compile existing C++ projects with the conversions applied. The design of a heuristic function for determining whether a given function template should use monomorphization or dynamic dispatch based on statically known metrics is proposed based on the results of an experiment. This heuristic is shown to achieve a small general improvement in program size across a set of open-source C++ projects when they are compiled using the plugin. The key findings from the experiment and from the development of the plugin are summarized with a general strategy for how the approach can be integrated into the design of future programming languages to promote more extensive use of generic programming in performance-sensitive code while avoiding regressions in program size and compilation time.
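As a purely hypothetical sketch of what such a heuristic over statically known metrics could look like (the metric names, weights, and threshold below are assumptions for illustration, not the formula proposed in the thesis or the behaviour of its Clang plugin):

```python
# Hypothetical shape of a monomorphization-vs-dynamic-dispatch heuristic based
# on statically known metrics. All names, weights, and the threshold are
# illustrative assumptions, not the heuristic designed in the thesis.
from dataclasses import dataclass

@dataclass
class TemplateStats:
    instantiations: int       # distinct argument-type permutations observed
    body_size: int            # approximate instructions per instantiation
    calls_in_loops: int       # call sites inside loops, a crude hotness proxy

def prefer_dynamic_dispatch(s, size_weight=1.0, hot_penalty=50.0, threshold=500.0):
    """True if converting the function template to dynamic dispatch is predicted
    to save enough code size to outweigh the expected indirection cost."""
    duplicated_code = max(s.instantiations - 1, 0) * s.body_size
    indirection_cost = hot_penalty * s.calls_in_loops
    return size_weight * duplicated_code - indirection_cost > threshold

print(prefer_dynamic_dispatch(TemplateStats(instantiations=12, body_size=300,
                                            calls_in_loops=2)))  # True
```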
6

Elliptic Curve Cryptography for Lightweight Applications.

Hitchcock, Yvonne Roslyn January 2003 (has links)
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), so named because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough detail is presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed. They obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, and this information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm contributes to the current range of possible SPA defences, offering good speed with low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices cannot generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve, which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis performs an examination of fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method where computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
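For context on the SPA defence being improved: the "commonly recommended defence" the abstract compares against is the double-and-add-always form of binary scalar multiplication, sketched below in Python on a deliberately tiny toy curve. The curve parameters are illustrative only (not a standard or secure curve), and this is the baseline countermeasure, not the thesis' new algorithm.

```python
# Baseline SPA countermeasure referenced above: double-and-add-always scalar
# multiplication, which performs a point addition in every loop iteration so
# the operation sequence is independent of the key bits. Toy curve for
# illustration; this is the defence the thesis improves upon.
p, a, b = 97, 2, 3                 # toy curve y^2 = x^3 + 2x + 3 over F_97
O = None                           # point at infinity

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                   # P + (-P) = O (also covers doubling with y = 0)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult_always(k, P):
    """Double-and-add-always: one doubling and one addition per key bit."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)        # doubling happens every iteration
        T = point_add(R, P)        # addition happens every iteration too
        if bit == '1':
            R = T                  # the addition result is kept only for 1-bits
    return R

G = (3, 6)                         # on the toy curve: 3^3 + 2*3 + 3 = 36 = 6^2 mod 97
print(scalar_mult_always(13, G))
```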
