21

Supporting Software Development Tools with An Awareness of Transparent Program Transformations

Song, Myoungkyu 13 June 2013 (has links)
Programs written in managed languages are compiled to a platform-independent intermediate representation, such as Java bytecode. The relatively high level of Java bytecode has engendered a widespread practice of changing the bytecode directly, without modifying the maintained version of the source code. This practice, called bytecode engineering or enhancement, has become indispensable for transparently introducing various concerns, including persistence, distribution, and security. For example, transparent persistence architectures help avoid the entanglement of business and persistence logic in the source code by changing the bytecode directly to synchronize objects with stable storage. With functionality added directly at the bytecode level, the source code reflects only partial semantics of the program. Specifically, the programmer can neither ascertain the program's runtime behavior by browsing its source code, nor map the runtime behavior back to the original source code. This research presents an approach that improves the utility of source-level programming tools by providing enhancement specifications written in a domain-specific language. By interpreting the specifications, a source-level programming tool can gain an awareness of the bytecode enhancements and improve its precision and usability. We demonstrate the applicability of our approach by making a source code editor and a symbolic debugger enhancements-aware. / Master of Science
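To make the "transparent enhancement" idea concrete, here is a minimal sketch of the kind of bytecode rewriting the abstract describes, written against the ASM library (an assumption; the thesis does not name a specific toolkit). `PersistenceTracker` and the mark-dirty-on-every-setter policy are hypothetical illustrations of a persistence concern woven in below the source level.

```java
// Sketch of a transparent bytecode enhancement using ASM (assumed 9.x).
import org.objectweb.asm.*;

public class DirtyFlagEnhancer extends ClassVisitor {
    public DirtyFlagEnhancer(ClassVisitor next) {
        super(Opcodes.ASM9, next);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String sig, String[] exceptions) {
        MethodVisitor mv = super.visitMethod(access, name, desc, sig, exceptions);
        if (!name.startsWith("set")) {
            return mv; // leave non-setters untouched
        }
        // Wrap setters: insert PersistenceTracker.markDirty(this) before the
        // original body. The maintained source code never shows this call.
        return new MethodVisitor(Opcodes.ASM9, mv) {
            @Override
            public void visitCode() {
                super.visitCode();
                mv.visitVarInsn(Opcodes.ALOAD, 0); // push 'this'
                mv.visitMethodInsn(Opcodes.INVOKESTATIC, "PersistenceTracker",
                        "markDirty", "(Ljava/lang/Object;)V", false);
            }
        };
    }

    // Rewrites a class file in memory: the executed bytecode now carries the
    // persistence concern, so source-level tools see only partial semantics.
    public static byte[] enhance(byte[] classFile) {
        ClassReader reader = new ClassReader(classFile);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);
        reader.accept(new DirtyFlagEnhancer(writer), 0);
        return writer.toByteArray();
    }
}
```

An enhancements-aware debugger, in the thesis's sense, would consume a specification of exactly such an injection so it can explain calls that have no source-code counterpart.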
22

On Efficiency and Accuracy of Data Flow Tracking Systems

Jee, Kangkook January 2015 (has links)
Data Flow Tracking (DFT) is a technique broadly used in a variety of security applications such as attack detection, privacy leak detection, and policy enforcement. Although effective, DFT inherits the high overhead common to in-line monitors, which hinders its adoption in production systems. Typically, the runtime overhead of DFT systems ranges from 3× to 100× when applied to pure binaries, and from 1.5× to 3× when inserted during compilation. Many performance optimization approaches have been introduced to mitigate this problem by relaxing propagation policies under certain conditions, but these typically introduce inaccurate taint tracking that leads to over-tainting or under-tainting. Despite acknowledging these performance/accuracy trade-offs, the DFT literature consistently fails to provide insights about their implications. A core reason, we believe, is the lack of established methodologies for understanding accuracy. In this dissertation, we attempt to address both efficiency and accuracy issues. To this end, we begin with libdft, a DFT framework for COTS binaries running atop commodity OSes, and we then introduce two major optimization approaches based on statically and dynamically analyzing program binaries. The first optimization approach extracts DFT tracking logic and abstracts it using a Taint Flow Algebra (TFA). We then apply classic compiler optimizations to eliminate redundant tracking logic and minimize interference with the target program. As a result, the optimization achieves a 2× speed-up over the baseline performance measured for libdft. The second optimization approach decouples the tracking logic from execution and runs the two in parallel, leveraging modern multi-core architectures. We again apply this approach to libdft, where it runs up to four times as fast while consuming fewer CPU cycles. We then present a generic methodology and tool for measuring the accuracy of arbitrary DFT systems in the context of real applications. With TaintMark, a prototype implementation for the Android framework, we have discovered that TaintDroid's various performance optimizations lead to serious accuracy issues, and that certain optimizations should be removed to vastly improve accuracy at little performance cost. The TaintMark approach is inspired by blackbox differential testing principles for detecting inaccuracies in DFTs, but it also addresses numerous practical challenges that arise when applying those principles to real, complex applications. We introduce the TaintMark methodology by using it to understand taint tracking accuracy trade-offs in TaintDroid, a well-known DFT system for Android. While the aforementioned works focus on the efficiency and accuracy of DFT systems that dynamically track data flow, we also explore another design choice that statically tracks information flow by analyzing and instrumenting the application source code. We apply this approach to the different problem of integer error detection in order to reduce the number of false alarms.
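As a rough illustration of the over-/under-tainting trade-off discussed above, here is a toy sketch of DFT propagation over three-address code. All names are hypothetical; real systems such as libdft track taint per byte of registers and memory via dynamic binary instrumentation, not per named variable.

```java
// Toy DFT: a relaxed policy skips propagation through arithmetic,
// gaining speed but losing flows (under-tainting).
import java.util.HashMap;
import java.util.Map;

public class ToyDft {
    private final Map<String, Boolean> taint = new HashMap<>();
    private final boolean relaxedPolicy;

    public ToyDft(boolean relaxedPolicy) { this.relaxedPolicy = relaxedPolicy; }

    public void source(String var) { taint.put(var, true); }   // e.g. network read

    public void assign(String dst, String src) {               // dst = src
        taint.put(dst, taint.getOrDefault(src, false));
    }

    public void add(String dst, String a, String b) {          // dst = a + b
        if (relaxedPolicy) {
            taint.put(dst, false); // faster, but the data flow is lost
        } else {
            taint.put(dst, taint.getOrDefault(a, false) || taint.getOrDefault(b, false));
        }
    }

    public boolean sink(String var) { return taint.getOrDefault(var, false); }

    public static void main(String[] args) {
        for (ToyDft dft : new ToyDft[]{new ToyDft(false), new ToyDft(true)}) {
            dft.source("x");        // x is tainted input
            dft.add("y", "x", "k"); // y derives from x
            dft.assign("z", "y");
            System.out.println(dft.sink("z")); // precise: true, relaxed: false
        }
    }
}
```

A differential-testing methodology in the spirit of TaintMark would flag exactly this divergence: the same inputs, two policies, contradictory taint verdicts at the sink.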
23

Sperner properties of the ideals of a Boolean lattice

McHard, Richard William. January 2009 (has links)
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 170-172). Also issued in print.
24

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin 10 October 2012 (has links)
As software systems grow in complexity, the need for efficient automated techniques for design, testing, and verification becomes more and more critical. Specification-based testing provides an effective approach for checking the correctness of software in general. Constraint-based analysis using specifications enables checking various rich properties by automating the generation of test inputs. However, as specifications get more complex, existing analyses face a scalability problem due to state explosion. This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach. The key insight behind the incremental technique is declarative slicing, a new class of optimizations inspired by traditional program slicing for imperative languages but applicable to analyzable declarative languages in general, and Alloy in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform the analysis incrementally, i.e., in several steps: a satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to optimize our approach further. Compared to the conventional use of the Alloy Analyzer, this yields even greater overall performance improvements for solving declarative models. Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order of magnitude speed-up over conventional techniques. / text
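The base/derived slicing idea can be illustrated with a deliberately tiny sketch (a hypothetical brute-force mini-solver, not Kato or the Alloy Analyzer): solve the prioritized base constraints first, prune early, and extend each base solution toward the derived constraints; if the base slice is unsatisfiable, so is the whole model.

```java
// Incremental solving sketch: x[0..1] form the base slice, x[2..3] the
// derived slice. The constraints here are arbitrary placeholders.
import java.util.function.Predicate;

public class IncrementalSketch {
    static Predicate<boolean[]> base    = x -> x[0] && !x[1];
    static Predicate<boolean[]> derived = x -> x[2] == x[0] && (x[3] || x[1]);

    public static void main(String[] args) {
        for (int b = 0; b < 4; b++) {                    // enumerate base slice
            boolean[] x = {(b & 1) != 0, (b & 2) != 0, false, false};
            if (!base.test(x)) continue;                 // prune: base unsat => skip
            for (int d = 0; d < 4; d++) {                // extend to derived slice
                x[2] = (d & 1) != 0; x[3] = (d & 2) != 0;
                if (derived.test(x)) {
                    System.out.printf("solution: %b %b %b %b%n", x[0], x[1], x[2], x[3]);
                    return;
                }
            }
        }
        System.out.println("unsatisfiable"); // base failure implies model failure
    }
}
```

The payoff in the real setting is that the base slice is handed to a solver once, and only the (smaller) extension problem is re-solved per step or per product.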
25

Neural enhancement for multiobjective optimization

Garret, Aaron, Dozier, Gerry V. January 2008 (has links) (PDF)
Dissertation (Ph.D.)--Auburn University, 2008. / Abstract. Vita. Includes bibliographical references (p. 176-181).
26

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin. January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
27

Text augmentation : inserting markup into natural language text with PPM models

Yeates, Stuart Andrew. January 2006 (has links)
Thesis (Ph.D.)--University of Waikato, 2006. / Includes bibliographical references (p. 157-170).
28

Learning syntactic program transformations from examples.

SOUSA, Reudismam Rolim de. 13 September 2018 (has links)
Tools such as ErrorProne, ReSharper, and PMD help programmers by automatically detecting and/or removing several suspicious code patterns, potential bugs, or instances of bad code style. These rules could be expressed as quick fixes that detect and rewrite unwanted code patterns. However, extending their catalogs of rules is complex and time-consuming. In this context, programmers may want to apply a repetitive edit to their code automatically to improve their productivity, but available tools do not support this. In addition, tool designers may want to identify rules that are worth automating. A similar phenomenon appears in intelligent tutoring systems, where instructors have to write cumbersome code transformations that describe "common faults" in order to fix similar student submissions to programming assignments. In this thesis, we present two techniques: REFAZER, a technique for automatically generating program transformations, and REVISAR, our technique for learning quick fixes from code repositories. We instantiate and evaluate REFAZER in two domains. First, given examples of code edits used by students to fix incorrect programming assignment submissions, we learn program transformations that can fix other students' submissions with similar faults. In our evaluation conducted on four programming tasks performed by seven hundred and twenty students, our technique helped to fix incorrect submissions for 87% of the students. In the second domain, we use repetitive code edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code. In our evaluation conducted on 56 scenarios of repetitive edits taken from three large C# open-source projects, REFAZER learns the intended program transformation in 84% of the cases, using only 2.9 examples on average. To evaluate REVISAR, we selected 9 projects, and REVISAR learned 920 transformations across projects. Acting as tool designers, we inspected the 381 most common transformations and classified 32 of them as quick fixes. To assess the quality of the quick fixes, we surveyed 164 programmers from 124 projects, showing them the 10 quick fixes that appeared in the most projects. Programmers supported 9 (90%) of the quick fixes. We submitted 20 pull requests applying our quick fixes to 9 projects; at the time of writing, programmers had supported 17 (85%) and accepted 10 of them.
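A toy sketch of the learn-from-one-edit workflow follows. REFAZER itself synthesizes AST-level transformations via program synthesis; this sketch only generalizes a single concrete edit into a regex template, which is far weaker, but it shows the "learn once, apply elsewhere" idea. The example edit (`equals("")` to `isEmpty()`) is hypothetical.

```java
// Generalize one observed edit into a template, then replay it elsewhere.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RewriteSketch {
    public static void main(String[] args) {
        // Example edit, e.g. seen in a developer's commit:
        //   name.equals("")   ==>   name.isEmpty()
        // Generalize the receiver into a hole ($1):
        Pattern before = Pattern.compile("(\\w+)\\.equals\\(\"\"\\)");
        String after = "$1.isEmpty()";

        // Apply the learned template to other locations in the code base.
        String code = "if (title.equals(\"\")) return; if (body.equals(\"\")) skip();";
        Matcher m = before.matcher(code);
        System.out.println(m.replaceAll(after));
        // -> if (title.isEmpty()) return; if (body.isEmpty()) skip();
    }
}
```

Working over ASTs instead of text is what lets the real systems handle whitespace, nesting, and semantically equivalent variants that a regex template would miss.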
29

Designing an object-oriented decompiler : Decompilation support for Interactive Disassembler Pro

Eriksson, David January 2002 (has links)
Decompilation, or reverse compilation, takes a computer program and produces high-level code that works like the original source code. This makes it easier to understand a computer program when source code is not available. However, there are very few tools for decompilation available today. This report describes the design and implementation of Desquirr, a decompilation plug-in for Interactive Disassembler Pro. Desquirr has an object-oriented design and performs basic decompilation of programs running on Intel x86 processors. The low-level analysis uses knowledge about specialized compiler constructs, called idioms, to perform a more accurate decompilation. Desquirr implements data flow analysis, meaning the conversion of primitive machine code instructions into code in a high-level language. The major part of the data flow analysis is register copy propagation, which builds high-level expressions from primitive instructions. Control flow analysis, meaning the restoration of high-level language constructs such as if/else and for loops, is not implemented. A high-level representation of a piece of machine code contains the same information as an assembly language representation of the same machine code, but in a format that is easier to comprehend. Symbols such as '*' and '+' are used in high-level language expressions, compared to instructions such as 'mul' and 'add' in assembly language. Two small test cases comparing decompiled code with assembly language show promising results in reducing the amount of information needed to comprehend a program.
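A small sketch of register copy propagation in the spirit described above (instruction set and names simplified and hypothetical): register definitions are substituted into their uses, so a sequence of primitive instructions folds into one high-level expression.

```java
// Fold "mov eax, x; add eax, y; mov z, eax" into "z = (x + y)".
import java.util.HashMap;
import java.util.Map;

public class CopyPropagation {
    public static void main(String[] args) {
        String[][] insns = {
            {"mov", "eax", "x"},
            {"add", "eax", "y"},
            {"mov", "z", "eax"},
        };
        Map<String, String> exprs = new HashMap<>(); // register -> expression
        for (String[] insn : insns) {
            String op = insn[0], dst = insn[1], src = insn[2];
            String rhs = exprs.getOrDefault(src, src); // propagate known copies
            switch (op) {
                case "mov" -> exprs.put(dst, rhs);
                case "add" -> exprs.put(dst, "(" + exprs.get(dst) + " + " + rhs + ")");
            }
        }
        System.out.println("z = " + exprs.get("z")); // z = (x + y)
    }
}
```

A real decompiler must additionally respect redefinitions, aliasing, and side effects before substituting, which is where most of the analysis effort goes.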
30

OpenModelica Support for Figaro Extensions Regarding Fault Analysis

Carlqvist, Alexander January 2014 (has links)
The practical result of this thesis is an extension to OpenModelica that transforms Modelica into Figaro. Modelica is an equation-based, object-oriented modeling language, and OpenModelica is an open-source implementation of it. Figaro is a language for reliability modeling: a general representation formalism, designed for fault analysis, that can be transformed into reliability models such as fault trees. Modelica, by contrast, was designed to model the behavior of physical systems and run dynamic simulations; because of that, you cannot simply break components and analyze what happens to a system. This work enables fault analysis in OpenModelica by transforming a Modelica model into a Figaro model and invoking the Figaro compiler, which lets us break particular components and see what happens to the system. This work is part of an ongoing effort to integrate several modeling environments.
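As a rough illustration of what "breaking a component" buys once a model is in a reliability formalism, here is a toy fault-tree evaluation (plain Java; this sketch does not attempt to reproduce Figaro syntax, and the component names are hypothetical): inject a failure into one basic event and propagate it through AND/OR gates to the system-level failure event.

```java
// Toy fault tree: two redundant pumps (AND gate) feeding an OR gate with a valve.
import java.util.Map;

public class FaultTreeSketch {
    public static void main(String[] args) {
        // Basic events: which components are broken? Inject a fault into pumpA.
        Map<String, Boolean> failed = Map.of(
            "pumpA", true,   // injected fault
            "pumpB", false,
            "valve", false
        );
        // AND gate: cooling is lost only if both redundant pumps fail.
        boolean pumpsDown = failed.get("pumpA") && failed.get("pumpB");
        // OR gate: the system fails if cooling is lost or the valve fails.
        boolean systemFails = pumpsDown || failed.get("valve");
        System.out.println("system failure: " + systemFails); // false: redundancy holds
    }
}
```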
