21

Computing component specifications from global system requirements / Beräkning av komponentspecifikationer från globala systemkrav

Björkman, Carl January 2017 (has links)
If we have a program with strict control-flow security requirements and want to ensure system requirements by verifying properties of that program, but part of the code base comes in the form of a plug-in or third-party library that we do not have access to at the time of verification, the procedure presented in this thesis can be used to generate the requirements that the plug-ins or third-party libraries must fulfil in order for the final product to satisfy the given system requirements. This thesis builds upon a transformation procedure that turns control-flow properties of a behavioural form into a structural form. The control-flow properties focus purely on control flow in the sense that they abstract away any kind of program data and target only call and return events. By behavioural properties we refer to properties regarding execution behaviour, and by structural properties to properties regarding sequences of instructions in the source code or object code. The result presented in this thesis takes this transformation procedure one step further and assumes that some methods (or functions or procedures, depending on the programming language) are given in the form of models called flow graphs, while the remaining methods are left unspecified. The output then becomes a set of structural constraints for the unspecified methods, which they must adhere to in order for any completion of the partial flow graph to satisfy the behavioural formula.
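To give a feel for the kind of output such a procedure produces, the minimal sketch below (not taken from the thesis; the graph, method names, and constraint are invented for illustration) models a partial flow graph as call edges between methods and checks one simple structural constraint against a third-party completion.

```python
# Illustrative sketch only: a partial flow graph as a mapping from each known
# method to the set of methods it may call; plug-in methods are unspecified.
known_calls = {
    "main": {"parse", "plugin_handler"},
    "parse": {"log"},
    "log": set(),
}
unspecified = {"plugin_handler"}  # provided later by a third party

# A structural constraint computed for the unspecified method: any completion
# of the flow graph must not let plugin_handler call the sensitive method.
forbidden_callees = {"plugin_handler": {"log"}}  # hypothetical constraint

def completion_satisfies(completion):
    """Check a proposed implementation (its call edges) against the constraints."""
    for method, callees in completion.items():
        banned = forbidden_callees.get(method, set())
        if callees & banned:
            return False
    return True

# A third-party completion of the partial flow graph.
plugin_impl = {"plugin_handler": {"parse"}}
print(completion_satisfies(plugin_impl))  # True: it avoids the forbidden call
```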
22

Generation of dynamic control-dependence graphs for binary programs

Pogulis, Jakob January 2014 (has links)
Dynamic analysis of binary files is an area of computer science that has many purposes. It is useful when debugging software in a development environment and the developer needs to know which statements affected the value of a specific variable. It is also useful when analyzing software for potential vulnerabilities, where data controlled by a malicious user could potentially result in the software executing adverse commands or malicious code. In this thesis a tool has been developed to perform dynamic analysis of x86 binaries in order to generate dynamic control-dependence graphs over the execution. These graphs can be used to determine which conditional statements led to a certain outcome. The tool has been developed for x86 Linux systems using the dynamic binary instrumentation framework PIN, developed and maintained by Intel. Techniques that use the additional control-flow information available during dynamic analysis to improve the control-flow information have been implemented and tested. The basic theory of dynamic analysis as well as dynamic slicing is discussed, and a basic overview of the implementation of a dynamic analysis tool is presented. The techniques used to improve the control-flow graph have a significant impact on the performance of the dynamic analysis tool, but approaches to improving the performance are discussed.
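As a rough illustration of the core data structure, the sketch below (not the thesis tool; block names and the static relation are invented) builds a dynamic control-dependence graph from an execution trace of basic blocks, given a static control-dependence relation: each executed block instance is linked to the most recent instance of a block it is statically control dependent on.

```python
# Illustrative sketch: build a dynamic control-dependence graph (DCDG) from an
# execution trace, given the static control-dependence relation
# (block -> blocks it is control dependent on).
static_cd = {
    "B2": {"B1"},        # B2 executes only under the branch decided in B1
    "B3": {"B1"},
    "B4": {"B2"},
}
trace = ["B1", "B2", "B4", "B1", "B3"]  # dynamic execution; blocks may repeat

def dynamic_cdg(trace, static_cd):
    """Link each executed block instance to the most recent instance of a
    block it is statically control dependent on."""
    edges = []
    last_seen = {}  # block name -> index of its most recent instance in the trace
    for i, block in enumerate(trace):
        for parent in static_cd.get(block, ()):
            if parent in last_seen:
                edges.append((last_seen[parent], i))  # (controlling, controlled)
        last_seen[block] = i
    return edges

print(dynamic_cdg(trace, static_cd))
# [(0, 1), (1, 2), (3, 4)]: each instance depends on the latest controlling branch
```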
23

Alternating Control Flow Graph Reconstruction by Combining Constant Propagation and Strided Intervals with Directed Symbolic Execution / Alternerande kontrollflödesgrafsrekonstruktion genom att kombinera propagerande av konstanter och klivande intervaller med riktad symbolisk exekvering

Peterson, Thomas January 2019 (has links)
In this thesis we address the problem of control-flow reconstruction in the presence of indirect jumps. We introduce an alternating approach that combines both over- and under-approximation to increase the precision of the reconstructed control flow automaton compared to pure over-approximation. More specifically, the abstract-interpretation-based tool Jakstab, from earlier work by Kinder, is used for the over-approximation. Furthermore, directed symbolic execution is applied to under-approximate successors of instructions when these cannot be over-approximated precisely. The results of our experiments show that our approach can improve the precision of the reconstructed CFA compared to only using Jakstab. However, they reveal that our approach consumes a large amount of memory, since it requires extraction and temporary storage of a large number of possible paths to the unresolved locations. As such, its usability is limited to control flow automata of limited complexity. Further, the results show that strided interval analysis suffers in performance when encountering particularly challenging loops in the control flow automaton.
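For readers unfamiliar with the strided-interval domain mentioned above, the sketch below shows the basic abstract value and its over-approximating join. It is a simplified illustration (no bit-width or wrap-around handling) and is not Jakstab's implementation.

```python
from dataclasses import dataclass
from math import gcd

@dataclass(frozen=True)
class StridedInterval:
    """s[lo, hi] represents the set {lo, lo + s, ..., hi}; stride 0 is a singleton."""
    stride: int
    lo: int
    hi: int

    def join(self, other):
        """Over-approximating union of two strided intervals."""
        lo, hi = min(self.lo, other.lo), max(self.hi, other.hi)
        stride = gcd(gcd(self.stride, other.stride), abs(self.lo - other.lo))
        return StridedInterval(stride, lo, hi)

# Two possible jump-table index values discovered on different paths:
a = StridedInterval(4, 0x1000, 0x100C)   # {0x1000, 0x1004, 0x1008, 0x100C}
b = StridedInterval(0, 0x1014, 0x1014)   # the singleton {0x1014}
print(a.join(b))   # StridedInterval(stride=4, lo=4096, hi=4116)
```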
24

Prescriptive Safety-Checks through Automated Proofs for Control-Flow Integrity

Tan, Jiaqi 01 November 2016 (has links)
Embedded software today is pervasive: it can be found everywhere, from coffee makers and medical devices to cars and aircraft. Embedded software today is also open and connected to the Internet, exposing it to external attacks that can cause its Control-Flow Integrity (CFI) to be violated. Control-Flow Integrity is an important safety property of software which ensures that the behavior of the software is not inadvertently changed. The violation of CFI in software can cause unintended behaviors and can even lead to catastrophic incidents in safety-critical systems. This dissertation develops a two-part approach to CFI: (i) prescribing source-code safety-checks, which prevent the root causes of CFI violations, that programmers can insert themselves, and (ii) formally proving CFI for the machine code of programs with source-code safety-checks. First, our prescribed safety-checks, when applied, prevent the root causes of CFI violations, thereby enabling software to recover from them in a customizable way. In addition, our prescribed safety-checks are visible to programmers, empowering them to ensure that the behavior of their software is not inadvertently changed by the prescribed safety-checks. However, programmer-inserted safety-checks may be incomplete. Thus, current techniques for proving CFI, which assume that safety-checks are complete, may not work. Second, this dissertation develops a logic approach that automates formal proofs of CFI for the machine code of software containing both source-code CFI safety-checks and system calls. We extend an existing trustworthy Hoare logic with new proof rules, proof tactics, and a novel proof-search algorithm, which exploit the principle of local reasoning for safety properties to automatically generate CFI proofs for the machine code of programs compiled with our prescribed source-code safety-checks. To the best of our knowledge, our approach to CFI is the first to combine programmer-visible source-code enforcement mechanisms for CFI (enabling programmers to customize them and observe that their software is not inadvertently changed) with machine-code proofs of CFI that can be automated, and it does not require a trusted or verified compiler to ensure that its proven properties hold in machine code. We evaluate our CFI approach on realistic embedded software: the MiBench and WCET benchmarks, implementations of common file utilities, and programs interfacing with hardware inputs and outputs on the Raspberry Pi single-board computer. The variety of our target programs, and our ability to support useful features such as file and hardware inputs and outputs, demonstrate the wide applicability of our approach.
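The sketch below illustrates, in heavily simplified form, the general idea of a programmer-visible safety-check guarding an indirect call with a customizable recovery handler. All names are invented for illustration; the dissertation's actual mechanism targets C source and machine code, not Python.

```python
# Conceptual sketch of a programmer-visible CFI safety-check (names invented):
# before an indirect call, verify the target against the statically known set
# of valid targets, and recover in a programmer-chosen way on a violation.
def brew_coffee():
    return "coffee"

def run_diagnostics():
    return "diagnostics ok"

VALID_TARGETS = {brew_coffee, run_diagnostics}   # the only legitimate callees

def safe_indirect_call(target, on_violation=lambda: "safe default"):
    """Prescribed safety-check: call target only if it is a known-valid callee;
    otherwise invoke the customizable recovery handler instead of failing."""
    if target in VALID_TARGETS:
        return target()
    return on_violation()

print(safe_indirect_call(brew_coffee))                            # normal call
print(safe_indirect_call(eval, on_violation=lambda: "blocked"))   # corrupted-pointer analogue
```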
25

On the use of control- and data-flow in fault localization / Sobre o uso de fluxo de controle e de dados para a localização de defeitos

Ribeiro, Henrique Lemos 19 August 2016 (has links)
Testing and debugging are key tasks during the development cycle, but they are among the most expensive activities of the development process. To improve the productivity of developers during debugging, various fault localization techniques have been proposed, among which Spectrum-based Fault Localization (SFL), also known as Coverage-based Fault Localization (CBFL), is one of the most promising. SFL techniques pinpoint program elements (e.g., statements, branches, and definition-use associations), sorting them by their suspiciousness. Heuristics are used to rank the most suspicious program elements, which are then mapped into lines to be inspected by developers. Although data-flow spectra (definition-use associations) have been shown to perform better than control-flow spectra (statements and branches) at locating the bug site, the high overhead of collecting data-flow spectra has prevented their use on industry-level code. A data-flow coverage tool was recently implemented presenting on average 38% run-time overhead for large programs. Such a fairly modest overhead motivates the study of SFL techniques using data-flow information in programs similar to those developed in industry. To achieve this goal, we implemented Jaguar (JAva coveraGe faUlt locAlization Ranking), a tool that employs control-flow and data-flow coverage in SFL techniques. The effectiveness and efficiency of both coverages are compared using 173 faulty versions with sizes varying from 10 to 96 KLOC, using ten known SFL heuristics to rank the most suspicious lines. The results show that the behavior of the heuristics is similar for both control- and data-flow coverage: Kulczynski2 and Mccon perform better when a small number of lines is investigated (from 5 to 30 lines), while Ochiai performs better when more lines are inspected (30 to 100 lines). The comparison between control- and data-flow coverage shows that data-flow locates more defects in the range of 10 to 50 inspected lines, being up to 22% more effective. Moreover, in the range of 20 to 100 lines, data-flow ranks the bug better than control-flow with statistical significance. However, data-flow is still more expensive than control-flow: it takes from 23% to 245% longer to obtain the most suspicious lines; on average, data-flow is 129% more costly. Therefore, our results suggest that data-flow is more effective at locating faults because it tracks more relationships during program execution. On the other hand, SFL techniques supported by data-flow coverage need to become more efficient for practical use in industrial settings.
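As an illustration of the ranking step described above, the sketch below computes Ochiai suspiciousness scores from a coverage spectrum and test verdicts. The Ochiai formula is standard; the coverage matrix and line names are invented for the example and are not data from the study.

```python
from math import sqrt

def ochiai(ef, ep, nf, np):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep)).
    ef/ep = failing/passing tests that cover the element,
    nf/np = failing/passing tests that do not cover it."""
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Invented coverage spectra: element -> covered-by-test flags, plus test verdicts.
coverage = {
    "line 12": [1, 1, 0, 1],
    "line 27": [0, 1, 1, 1],
    "line 40": [1, 0, 0, 0],
}
failing = [1, 0, 0, 1]  # verdict per test: 1 = failed

total_fail = sum(failing)
total_pass = len(failing) - total_fail
ranking = []
for elem, cov in coverage.items():
    ef = sum(1 for c, f in zip(cov, failing) if c and f)
    ep = sum(1 for c, f in zip(cov, failing) if c and not f)
    ranking.append((ochiai(ef, ep, total_fail - ef, total_pass - ep), elem))

for score, elem in sorted(ranking, reverse=True):
    print(f"{elem}: {score:.2f}")  # most suspicious lines first
```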
27

Implementação e avaliação da técnica ACCE para detecção e correção de erros de fluxo de controle no LLVM / Implementation and evaluation of the ACCE technique to detection and correction of control flow errors in the LLVM

Parizi, Rafael Baldiati January 2013 (has links)
Computer-based systems are used in many electronic devices that, in many cases, execute critical tasks. Fault prevention techniques such as software validation and verification are not sufficient to ensure acceptable levels of reliability, because they cannot handle faults that occur at run time, such as transient faults. Most fault tolerance techniques for transient faults focus on the detection and correction of control-flow errors, which can account for up to 70% of the errors caused by this kind of fault. These techniques treat the faults at the software level, transforming the program by inserting new instructions that capture and correct illegal jumps occurring during execution; ACCE is the best-known such technique today. In this work, an implementation of the ACCE technique was developed as a program transformation pass in the LLVM compiler infrastructure. ACCE operates on the LLVM intermediate representation, which makes it portable across programming languages and machine architectures. Besides implementing the technique as a transformation pass, LLVM was also used in experiments evaluating the impact on the efficacy of ACCE when it is applied to programs previously optimized by other compiler transformations. This evaluation is essential since programs are rarely compiled without optimizations enabled, and, as far as we know, it had never been done before. The experiments injected faults into programs from the MiBench benchmark suite under different scenarios and evaluated ACCE in terms of fault correction when applied to programs optimized by individual transformations and by combinations of transformations. The results show that the ACCE technique is effective at correcting faults, but for some programs optimized by specific transformations there was a reduction in the correction rate. This work analyses those experiments and points out possible causes for the reduction in the effectiveness of ACCE.
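ACCE belongs to a family of techniques that detect control-flow errors by assigning a signature to each basic block and checking a runtime signature at block boundaries. The sketch below is a heavily simplified, invented illustration of that detection idea in plain Python; it is not the LLVM transformation pass developed in this work, and it omits ACCE's correction step.

```python
# Simplified illustration of signature-based control-flow error detection in
# the spirit of ACCE (not the actual LLVM pass from this work): each basic
# block has a compile-time signature; the runtime signature is checked on
# block entry, so an illegal jump lands in a block whose expected signature
# does not match and is detected.
SIGNATURES = {"entry": 0x1, "loop": 0x2, "exit": 0x3}

class ControlFlowError(Exception):
    pass

runtime_sig = SIGNATURES["entry"]  # set when execution starts in "entry"

def enter_block(name):
    """Check inserted at the top of every block: detect unexpected arrivals."""
    if runtime_sig != SIGNATURES[name]:
        raise ControlFlowError(f"illegal jump into block {name!r}")

def branch_to(name):
    """Signature update inserted before every legal branch."""
    global runtime_sig
    runtime_sig = SIGNATURES[name]

branch_to("loop"); enter_block("loop")     # legal transfer: check passes
try:
    enter_block("exit")                    # simulated illegal jump: detected
except ControlFlowError as err:
    print(err)
```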
28

Workflow Modeling Using Finite Automata

Khemuka, Atul Ravi 07 November 2003 (has links)
A workflow is an automation of a business process. In general, it consists of processes and activities, which are represented by well-defined tasks. Examples include 'Office Automation', 'Health Care', and service-oriented processes such as 'Online Reservations', 'Online Bookstores', and 'Insurance Claims'. The entities that execute these tasks are humans, application programs, or database management systems, and the tasks are related to and dependent on one another according to business policies and rules. With the rapid increase in application domains that use workflow management systems, there is a need for a framework that can be used to implement these applications. In particular, it is essential to provide a formal technique for defining a problem that can be used by various workflow software product developers. In this work, a formal framework based on finite state automata that facilitates modeling and analysis of workflows is presented. The workflow and its specifications are modeled separately as finite state automaton models. We provide a general framework for specifying control-flow dependencies in the context of supervisory control theory. We also identify several properties of supervisory control theory and demonstrate their use in conducting the analysis of workflows.
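The sketch below shows the basic modeling idea: a small workflow encoded as a deterministic finite automaton over task events, with acceptance meaning that a task sequence respects the control-flow dependencies. The workflow, states, and task names are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch (invented workflow): an insurance-claim workflow as a DFA
# over task events, accepting exactly the task orderings allowed by the rules.
TRANSITIONS = {
    ("start",     "submit_claim"): "submitted",
    ("submitted", "assess"):       "assessed",
    ("assessed",  "approve"):      "settled",
    ("assessed",  "reject"):       "closed",
    ("settled",   "pay"):          "closed",
}
ACCEPTING = {"closed"}

def run_workflow(tasks, state="start"):
    """Return True if the task sequence is a valid execution of the workflow."""
    for task in tasks:
        nxt = TRANSITIONS.get((state, task))
        if nxt is None:
            return False        # task not allowed in the current state
        state = nxt
    return state in ACCEPTING

print(run_workflow(["submit_claim", "assess", "approve", "pay"]))  # True
print(run_workflow(["submit_claim", "pay"]))                       # False: dependency violated
```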
29

Efficiency of LTTng as a Kernel and Userspace Tracer on Multicore Environment

Guha Anjoy, Romik, Chakraborty, Soumya Kanti January 2010 (has links)
With the advent of huge multicore processors, complex hardware, intermingled networks, and huge disk storage capacities, the programs used in a system and the code written to control them are becoming increasingly large and complicated. There is a growing need for a framework that tracks issues, debugs the program, and helps analyze the reasons behind degradation of system and program performance. Another big concern when deploying such a framework on complex systems is its footprint on the setup. The LTTng project aims to provide such an effective tracing and debugging toolset for Linux systems. Our work measures the effectiveness of LTTng in a multicore environment and evaluates its effect on system and program performance. We incorporate control- and data-flow analysis of the system and of the LTTng binaries to reach a conclusion.
