11 |
Static WCET Analysis Based on Abstract Interpretation and Counting of Elements. Bygde, Stefan, January 2010.
In a real-time system, it is crucial to ensure that all tasks of the system hold their deadlines. A missed deadline in a real-time system means that the system has not been able to function correctly. If the system is safety critical, this can lead to disaster. To ensure that all tasks keep their deadlines, the Worst-Case Execution Time (WCET) of these tasks has to be known. This can be done by measuring the execution times of a task; however, this is inflexible, time consuming and in general not safe (i.e., the worst case might not be found). Unless the task is measured with all possible input combinations and configurations, which is in most cases out of the question, there is no way to guarantee that the longest measured time actually corresponds to the real worst case.

Static analysis analyses a safe model of the hardware together with the source or object code of a program to derive an estimate of the WCET. This estimate is guaranteed to be equal to or greater than the real WCET. This is done by making calculations which in all steps make sure that the time is exactly or conservatively estimated. In many cases, however, the execution time of a task or a program is highly dependent on the given input. Thus, the estimated worst case may correspond to some input or configuration which is rarely (or never) used in practice. For such systems, where execution time is highly input dependent, a more accurate timing analysis which takes input into consideration is desired.

In this thesis we present a framework based on abstract interpretation and counting of possible semantic states of a program. This is a general method of WCET analysis, which is language independent and platform independent. The two main applications of this framework are a loop bound analysis and a parametric analysis. The loop bound analysis can be used to quickly find upper bounds for loops in a program while the parametric framework provides an input-dependent estimation of the WCET. The input-dependent estimation can give much more accurate estimates if the input is known at run-time. / PROGRESS
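A minimal sketch of the flavour of analysis involved, assuming a simple interval abstract domain rather than the thesis's element-counting framework: the C++ snippet below bounds the iterations of a counted loop when only an interval for its input is known statically. The Interval, join, and loopBound names are illustrative, not taken from the thesis.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// A minimal interval abstract domain, just enough to bound a counted loop.
struct Interval {
    int64_t lo, hi;                      // invariant: lo <= hi
};

// Least upper bound: the smallest interval covering both arguments.
Interval join(Interval a, Interval b) {
    return { std::min(a.lo, b.lo), std::max(a.hi, b.hi) };
}

// Upper bound on iterations of:  for (i = init; i < n; i += step)
// when all we know about n is the interval 'n' (and step > 0).
int64_t loopBound(int64_t init, int64_t step, Interval n) {
    if (n.hi <= init) return 0;              // loop is never entered
    return (n.hi - init + step - 1) / step;  // ceiling division
}

int main() {
    // Two program paths assign different ranges to n; the analysis merges them.
    Interval n = join({8, 16}, {1, 100});
    std::cout << "worst-case iterations: "
              << loopBound(/*init=*/0, /*step=*/4, n) << "\n";  // 25
}
```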
12 |
Towards a generic framework for the abstract interpretation of Java. Pollet, Isabelle, 23 April 2004.
The application field for static analysis of Java programs is getting broader, ranging from compiler optimizations (like dynamic dispatch elimination) to security issues. Many of those analyses include type analyses. We propose a 'generic' framework, which improves on previous type analyses by introducing structural information. Moreover, structural information allows us to easily extend the framework to perform many different kinds of analyses.
The framework is based on the abstract interpretation methodology. It is composed of a standard semantics, a family of abstract domains, an abstract semantics based on these domains and a
post-fixpoint algorithm to compute the abstract semantics. The analysis is limited to a representative subset of Java, without concurrency.
A complete prototype of the framework allows us to illustrate the accuracy and the efficiency of the approach (for moderately sized programs).
13 |
Lifting the Abstraction Level of Compiler Transformations. Tang, Xiaolong, 16 December 2013.
Production compilers implement optimizing transformation rules for built-in types. What justifies applying these optimizing rules are the axioms that hold for built-in types and the built-in operations supported by those types. Similar axioms also hold for user-defined types and the operations defined on them, and therefore justify a set of optimization rules that may apply to user-defined types. Production compilers, however, do not attempt to construct and apply these optimization rules to user-defined types.
Built-in types together with the axioms that apply to them are instances of more general algebraic structures. So are user-defined types and their associated axioms. We use generic programming, a programming paradigm for designing efficient, reusable software libraries, to identify the commonality of classes of types, whether built-in or user-defined, convey the semantics of these classes of types to compilers, design scalable and effective program analysis for them, and eventually apply optimizing rules to the operations on them.
In generic programming, algorithms and data structures are defined in terms of such algebraic structures. The same definitions are reused for many types, both built-in and user-defined. This dissertation applies generic programming to compiler analyses and transformations. Analyses and transformations are specified for general algebraic structures, and they apply to all types, both built-in and user-defined.
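To make the connection between axioms and optimization concrete, here is a hypothetical C++ sketch, not taken from the dissertation: a user-defined string-concatenation monoid exposes its identity element, and a generic fold skips identity operands, justified by the monoid axiom that combining with the identity leaves a value unchanged, the same way x + 0 = x justifies a built-in integer rewrite.

```cpp
#include <iostream>
#include <string>

// A user-defined type whose operation forms a monoid (associative, with an
// identity element), just as built-in '+' on integers does.
struct Concat {
    using value_type = std::string;
    static std::string identity() { return ""; }
    static std::string combine(const std::string& a, const std::string& b) {
        return a + b;
    }
};

// A fold that skips identity elements entirely; the monoid axiom x * e == x
// justifies the skip, the same algebraic fact a compiler uses for x + 0 -> x.
template <class Monoid, class It>
typename Monoid::value_type fold(It first, It last) {
    auto acc = Monoid::identity();
    for (; first != last; ++first) {
        if (*first == Monoid::identity()) continue;  // axiom-justified skip
        acc = Monoid::combine(acc, *first);
    }
    return acc;
}

int main() {
    std::string xs[] = {"ab", "", "cd", ""};
    std::cout << fold<Concat>(std::begin(xs), std::end(xs)) << "\n";  // abcd
}
```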
14 |
Stochastic abstraction of programs: towards performance-driven development. Smith, Michael James Andrew, January 2010.
Distributed computer systems are becoming increasingly prevalent, thanks to modern technology, and this leads to significant challenges for the software developers of these systems. In particular, in order to provide a certain service level agreement with users, the performance characteristics of the system are critical. However, developers today typically consider performance only in the later stages of development, when it may be too late to make major changes to the design. In this thesis, we propose a performance driven approach to development — based around tool support that allows developers to use performance modelling techniques, while still working at the level of program code. There are two central themes to the thesis. The first is to automatically relate performance models to program code. We define the Simple Imperative Remote Invocation Language (SIRIL), and provide a probabilistic semantics that interprets a program as a Markov chain. To make such an interpretation both computable and efficient, we develop an abstract interpretation of the semantics, from which we can derive a Performance Evaluation Process Algebra (PEPA) model of the system. This is based around abstracting the domain of variables to truncated multivariate normal measures. The second theme of the thesis is to analyse large performance models by means of compositional abstraction. We use two abstraction techniques based on aggregation of states — abstract Markov chains, and stochastic bounds — and apply both of them compositionally to PEPA models. This allows us to model check properties in the three-valued Continuous Stochastic Logic (CSL), on abstracted models. We have implemented an extension to the Eclipse plug-in for PEPA, which provides a graphical interface for specifying which states in the model to aggregate, and for performing the model checking.
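As a minimal, hypothetical illustration of what a Markov-chain performance model gives the developer (this is not the SIRIL-to-PEPA tool chain itself, and the transition probabilities are invented), the C++ sketch below treats an abstracted remote invocation as a tiny absorbing Markov chain and computes the expected number of steps to completion by fixed-point iteration.

```cpp
#include <array>
#include <iostream>

// A tiny absorbing Markov chain standing in for an abstracted remote call:
// state 0 = "request sent", state 1 = "retrying", state 2 = "done" (absorbing).
// Expected steps-to-done solves  t = 1 + Q t  over the transient states,
// computed here by simple fixed-point iteration.
int main() {
    const double Q[2][2] = {
        {0.0, 0.3},   // from state 0: 0.3 retry, 0.7 done
        {0.4, 0.0},   // from state 1: 0.4 back to request, 0.6 done
    };
    std::array<double, 2> t = {0.0, 0.0};
    for (int iter = 0; iter < 1000; ++iter) {
        std::array<double, 2> next;
        for (int i = 0; i < 2; ++i)
            next[i] = 1.0 + Q[i][0] * t[0] + Q[i][1] * t[1];
        t = next;
    }
    std::cout << "expected steps from start: " << t[0] << "\n";  // ~1.48
}
```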
15 |
Dekompilátor pro Objective-C / A decompiler for Objective-C. Mráček, Jakub, January 2016.
Objective-C is a popular programming language primarily used on the OS X and iOS platforms. We present a practical approach to decompilation of programs written in Objective-C and compiled for the x86 and AArch64 architectures using LLVM. Based on already-known generic reverse engineering techniques and compiler theory, this thesis analyzes new challenges and opportunities that occur in Objective-C binaries. We then offer solutions and algorithms that allow a decompiler to better recognize the high-level structures commonly used in Objective-C source code. The thesis introduces an implementation of a new decompiler called "Cricket", an interactive GUI application for OS X, which uses the described algorithms and pattern matching methods to reconstruct source code in Objective-C. The decompiler tries to maximize readability of the output and allows user interaction to further modify the generated source code. The implemented software is then evaluated on a popular open-source framework and the results are compared to a competing product.
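To give a flavour of such pattern matching, here is a hypothetical, heavily simplified C++ sketch (not Cricket's actual code): a call to the Objective-C runtime dispatcher objc_msgSend, as it appears in a recovered intermediate form, is lifted back to bracketed message-send syntax.

```cpp
#include <iostream>
#include <string>
#include <vector>

// A recovered call site; args[0] is the receiver, args[1] the selector.
struct Call {
    std::string callee;
    std::vector<std::string> args;
};

// Rewrite objc_msgSend(receiver, selector, extra...) into [receiver selector:extra].
std::string lift(const Call& c) {
    if (c.callee == "objc_msgSend" && c.args.size() >= 2) {
        std::string msg = "[" + c.args[0] + " " + c.args[1];
        for (size_t i = 2; i < c.args.size(); ++i) msg += ":" + c.args[i];
        return msg + "]";
    }
    return c.callee + "(...)";   // anything else stays a plain C-style call
}

int main() {
    Call c{"objc_msgSend", {"array", "objectAtIndex", "3"}};
    std::cout << lift(c) << "\n";   // [array objectAtIndex:3]
}
```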
16 |
Forced Execution for Security Analysis of Software Without Source Code. Fei Peng (10682163), 03 May 2021.
Binary code analysis is widely used in many applications, including reverse engineering, software forensics and security. It is critical in these applications because the analysis of binary code does not require source code to be available. For example, in one security application, given a potentially malicious executable file, binary analysis can help build human-inspectable representations such as the control flow graph and call graph.

Existing binary analysis can be roughly classified into two categories: static analysis and dynamic analysis. Both types of analysis have their own strengths and limitations. Static binary analysis is based on scanning the binary code without executing it. It usually has good code coverage, but the analysis results are sometimes inaccurate due to the lack of dynamic execution information. Dynamic binary analysis, on the other hand, is based on executing the binary on a set of inputs. In contrast, its results are usually accurate but rely heavily on the coverage of the test inputs, which sometimes do not exist.

In this thesis, we first present a novel systematic binary analysis framework called X-Force. X-Force can force the binary to execute without using any inputs or proper environment setup. As part of the design of our framework, we have proposed a number of techniques, including (1) a path exploration module which can drive the program to execute different paths; (2) a crash-free execution model that can detect and recover from execution exceptions properly; and (3) solutions to a large number of technical challenges in making the technique work on real-world binaries.

Although X-Force is a highly effective method to penetrate malware self-protection and expose hidden behavior, it is very heavyweight. The reason is that it requires tracing individual instructions, reasoning about pointer alias relations on the fly, and repairing invalid pointers by on-demand memory allocation. To address this problem, we develop a lightweight and practical forced execution technique. Without losing analysis precision, it avoids tracking individual instructions and on-demand allocation. Under our scheme, a forced execution is very similar to a native one. It features a novel memory pre-planning phase that pre-allocates a large memory buffer, then initializes the buffer, and the variables in the subject binary, with carefully crafted values in a random fashion before the real execution. The pre-planning is designed in such a way that dereferencing an invalid pointer has a very large chance of falling into the pre-allocated region and hence does not cause any exception, and semantically unrelated invalid pointer dereferences are highly likely to access disjoint (pre-allocated) memory regions, avoiding state corruption with probabilistic guarantees.
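A minimal C++ sketch of the memory pre-planning idea, under simplifying assumptions (a single pre-allocated region, pointer-sized seeding, source-level code rather than a binary): values are seeded so that, read as pointers, they land back inside the region, and so do chained dereferences.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

int main() {
    const size_t kRegionSize = 1 << 20;            // 1 MiB pre-allocated region
    std::vector<uint8_t> region(kRegionSize);

    // A "magic" pointer value somewhere in the middle of the region.
    uint8_t* magic = region.data() + kRegionSize / 2;

    // Seed the region itself with copies of that pointer so that chained
    // dereferences (p -> q -> r ...) also stay inside the region.
    for (size_t off = 0; off + sizeof(magic) <= kRegionSize; off += sizeof(magic))
        std::memcpy(region.data() + off, &magic, sizeof(magic));

    // An "uninitialized" pointer variable, seeded the way a forced executor would.
    uint8_t* p = magic;
    uint8_t* q = nullptr;
    std::memcpy(&q, p, sizeof(q));                 // dereference: still in-region
    std::cout << "dereferenced value stays inside the region: "
              << (q >= region.data() && q < region.data() + kRegionSize) << "\n";
}
```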
17 |
Exploring Abstraction Techniques for Scalable Bit-Precise Verification of Embedded Software. He, Nannan, 01 June 2009.
Conventional testing has become inadequate to satisfy the rigorous reliability requirements of embedded software, which plays an increasingly important role in many safety-critical applications. Automatic formal verification is a viable avenue for ensuring the reliability of such software. Recently, more and more formal verification techniques have begun modeling a non-Boolean data variable as a bit-vector with bounded width (i.e., a vector of multiple bits, such as 32 or 64 bits) to implement bit-precise verification. One major challenge in the scalable application of such bit-precise verification to real-world embedded software is that the state space for verification can be intractably large.
In this dissertation, several abstraction techniques are explored to deal with this scalability challenge in the bit-precise verification of embedded software. First, we propose a tight integration of program slicing, which is an important static program analysis technique, with bounded model checking (BMC). While many software verification tools apply program slicing as a separate preprocessing step, we integrate slicing operations into our model construction and reduction process and enhance them with compilation optimization techniques to compute accurate program slices. We also apply a proof-based abstraction-refinement framework to further remove those program segments irrelevant to the property being verified.
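A hypothetical C++ sketch of the backward-slicing step on a straight-line program (the dissertation integrates slicing with model construction rather than running it as a separate pass, so this shows only the underlying idea): statements whose definitions cannot reach the checked assertion are dropped before model checking.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

// A statement: the variable it writes (empty for the assertion) and the
// variables it reads.
struct Stmt {
    std::string text;
    std::string def;
    std::set<std::string> uses;
};

// Walk backwards from the assertion, keeping only statements whose definition
// is still relevant; everything else is irrelevant to the checked property.
std::vector<Stmt> slice(const std::vector<Stmt>& prog, std::set<std::string> relevant) {
    std::vector<Stmt> kept;
    for (auto it = prog.rbegin(); it != prog.rend(); ++it) {
        bool needed = it->def.empty() || relevant.count(it->def);
        if (!needed) continue;
        relevant.erase(it->def);
        relevant.insert(it->uses.begin(), it->uses.end());
        kept.insert(kept.begin(), *it);
    }
    return kept;
}

int main() {
    std::vector<Stmt> prog = {
        {"a = in1 + 1;", "a", {"in1"}},
        {"log = log + a;", "log", {"log", "a"}},   // irrelevant to the assertion
        {"b = a * 2;", "b", {"a"}},
        {"assert(b >= 2);", "", {"b"}},
    };
    for (const auto& s : slice(prog, {}))
        std::cout << s.text << "\n";               // the 'log' statement is dropped
}
```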
Next, we present a method of using symbolic simulation for scalable formal verification. The simulation introduces the symbolic value X to abstract the concrete values of variables. The method also embeds this symbolic simulation in a counterexample-guided abstraction-refinement framework to automatically construct and verify an abstract model, which has a smaller state space than that of the original concrete program.
This dissertation also presents our efforts on using two common testability metrics — controllability metric (CM) and observability metric (OM) — as the high-level structural guidance for scalable bit-precise verification. A new abstraction approach is proposed based on the concept of under- and over-approximation to efficiently solve bit-vector formulas generated from embedded software verification instances. These instances include both complicated arithmetic computations and intensive control structures. Our approach applies CM and OM to assist the abstraction refinement procedure in two ways: (1) it uses CM and OM to guide the construction of a simple under-approximate model, which includes only a subset of execution paths in a verification instance, so that a counterexample that refutes the instance can be obtained with reduced effort, and (2) in order to reduce the cost of using proof-based refinement alone, it uses OM heuristics to guide the restoration of additional verification-relevant formula constraints with low computational cost for refinement. Experiments show a significant reduction of the solving time compared to state-of-the-art solvers for the bit-vector arithmetic.
This dissertation finally proposes an efficient algorithm to discover non-uniform encoding widths of individual variables in the verification model, which may be smaller than their original modeling width but sufficient for the verification. Our algorithm distinguishes itself from existing approaches in that it is path-oriented; it takes advantage of CM and OM values to guide the computation of the initial, non-uniform encoding widths, and the effective adjustment of these widths along different paths, until the property is verified. It can restrict the search from those paths that are deemed less favorable or have been searched in previous steps, thus simplifying the problem. Experiments demonstrate that our algorithm can significantly speed up the verification especially in searching for a counterexample that violates the property under verification. / Ph. D.
18 |
Investigating Trade-Offs in Mitigating Double-Fetches Introduced by Compile-Time Optimizations: Analysing the Impact of Security Measures on Software Performance. Fransson, William, January 2024.
In software security, balancing the need for robust protection with performance considerations is a critical challenge. Mitigation techniques are essential to defend against various types of attacks, but they can also introduce performance overheads. Meanwhile, compilers provide optimisations that aim to enhance performance but can inadvertently introduce security vulnerabilities, such as double-fetches. This thesis explores the trade-offs associated with disabling compiler optimisation options to enhance security against such vulnerabilities. By examining various optimisation levels (-O1, -O2, -O3, -Ofast) in the GNU Compiler Collection (GCC) and LLVM compilers, we quantitatively analyse their impact on execution time, memory usage, and the complexity of the binaries. Our study reveals that while opting out of optimisations can significantly improve security by eliminating double-fetch bugs, it also leads to increased execution time and larger binary sizes. These findings underscore developers' need to make informed choices about optimisations, balancing security concerns with performance requirements. This work contributes to a deeper understanding of the security-performance trade-offs in software development and provides a foundation for further research into compiler optimisations and security.
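For readers unfamiliar with the vulnerability class, here is a hypothetical C++ sketch of a double-fetch-prone check/use pair and the usual single-snapshot mitigation. The thesis is concerned with the compiler (re)introducing the second fetch during optimisation, which this source-level example only approximates.

```cpp
#include <cstddef>
#include <cstring>

constexpr size_t kMax = 64;
char dst[kMax];

// Double-fetch-prone: the shared length is fetched once for the check and
// again for the use, so a concurrent writer can change it in between.
void copy_unsafe(const volatile size_t* shared_len, const char* src) {
    if (*shared_len <= kMax)
        std::memcpy(dst, src, *shared_len);   // second fetch of *shared_len
}

// Mitigation: force exactly one fetch, then check and use the local snapshot.
void copy_safe(const volatile size_t* shared_len, const char* src) {
    size_t len = *shared_len;                 // single read
    if (len <= kMax)
        std::memcpy(dst, src, len);           // uses only the snapshot
}

int main() {
    size_t shared = 16;
    char src[kMax] = {0};
    copy_unsafe(&shared, src);
    copy_safe(&shared, src);
}
```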
19 |
Compilation Techniques, Algorithms, and Data Structures for Efficient and Expressive Data Processing Systems. Supun Madusha Bandara Abeysinghe Tennakoon Mudiyanselage (17454786), 30 November 2023.
The proliferation of digital data, driven by factors like social media and e-commerce, has created an increasing demand for highly processed data at higher levels of fidelity, which puts increasing demands on modern data processing systems. In the past, data processing systems faced bottlenecks due to limited main memory availability. However, as main memory becomes more abundant, their optimization focus has shifted from disk I/O to optimized computation through techniques like compilation. This dissertation addresses several critical limitations within such compilation-based data processing systems.

In modern data analytics pipelines, combining workloads from various paradigms, such as traditional DBMS and machine learning, is common. These pipelines are typically managed by specialized systems designed for specific workload types. While these specialized systems optimize their individual performance, substantial performance loss occurs when they are combined to handle mixed workloads. This loss is mainly due to overheads at system boundaries, including data copying and format conversions, as well as the general inability to perform cross-system optimizations.

This dissertation tackles this problem from two angles. First, it proposes an efficient post-hoc integration of individual systems using generative programming via the construction of common intermediate layers. This approach preserves the best-of-breed performance of individual workloads while achieving state-of-the-art performance for combined workloads. Second, we introduce a high-level query language capable of expressing various workload types, acting as a general substrate to implement combined workloads. This allows the generation of optimized code for end-to-end workloads through the construction of an intermediate representation (IR).

The dissertation then shifts focus to data processing systems used for incremental view maintenance (IVM). While existing IVM systems achieve high performance through compilation and novel algorithms, they have limitations in handling specific query classes. Notably, they are incapable of handling queries involving correlated nested aggregate subqueries. To address this, our work proposes a novel indexing scheme based on a new data structure and a corresponding set of algorithms that fully incrementalize such queries. This approach results in substantial asymptotic speedups and order-of-magnitude performance improvements for workloads of practical importance.

Finally, the dissertation explores efficient and expressive fixed-point computations, with a focus on Datalog, a language widely used for declarative program analysis. Although existing Datalog engines rely on compilation and specialized code generation to achieve performance, they lack the flexibility to support extensions required for complex program analysis. Our work introduces a new Datalog engine built using generative programming techniques that offers both flexibility and state-of-the-art performance through specialized code generation.
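As a minimal, hypothetical illustration of the fixed-point evaluation that compiled Datalog engines build on (not the engine developed in this dissertation), the C++ sketch below runs semi-naive evaluation of transitive closure: each round joins only the newly derived facts (the delta) against the edge relation.

```cpp
#include <iostream>
#include <set>
#include <utility>

// Semi-naive evaluation of the classic Datalog program
//   path(x, y) :- edge(x, y).
//   path(x, z) :- path(x, y), edge(y, z).
using Fact = std::pair<int, int>;

int main() {
    std::set<Fact> edge = {{1, 2}, {2, 3}, {3, 4}};
    std::set<Fact> path = edge;          // facts from the first rule
    std::set<Fact> delta = edge;         // facts new in the previous round

    while (!delta.empty()) {
        std::set<Fact> next;
        for (const auto& [x, y] : delta)         // join only the delta...
            for (const auto& [a, z] : edge)      // ...against edge
                if (y == a && !path.count({x, z}))
                    next.insert({x, z});
        path.insert(next.begin(), next.end());
        delta = std::move(next);
    }
    std::cout << "path facts: " << path.size() << "\n";   // 6
}
```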
20 |
Domain-specific languages for massively parallel processors. Cartey, Luke, January 2013.
Massively Parallel Processors provide significantly higher peak performance figures than other forms of general purpose processors. However, this comes at a cost to the developer, who needs to deal with an increasingly complicated piece of hardware, for which applications need to be tweaked and optimised to achieve high performance. Domain-specific languages have been proposed as a potential solution to this complexity problem: generating GPU applications from high-level, declarative specifications. This thesis explores two related ideas: firstly, is it practical to synthesise DSLs from high-level languages, and secondly, how can we simplify the creation of such DSLs? This thesis proposes a novel approach whereby rather than considering single domains, we consider collections of collaborative domains in order to share common features and thus reduce the cost of development. We achieve this using a DSLs-within-a-DSL approach: a custom designed host language, into which extensions may be embedded. In order to ground our approach in a real case-study, we propose, design and develop a DSLs-within-a-DSL framework for bioinformatics. We use a restricted recursive functional language as the host language, and embed new DSLs into this language. Importantly, we describe how we can use a combination of novel and adopted automatic parallelisation techniques to synthesise a massively-parallel program for a GPU. This automatic parallelisation, achieved through the discovery of a schedule, and program synthesis techniques using the polyhedral model, interacts productively with our embedded extensions. To further simplify development, we provide a series of customisable heuristics for defining GPU parameters such as the block size (number of threads), grid size and location in the memory hierarchy of data-structures. This encapsulates GPU expertise within the compiler itself. We finally demonstrate that the total combination of these techniques results in applications with competitive performance, at much lower development cost and greater flexibility than comparable hand-coded applications.
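A hypothetical host-side C++ sketch of the kind of launch-configuration heuristic described, assuming a warp size of 32 and a 1024-thread block limit (the thesis encapsulates such customisable heuristics inside the compiler itself):

```cpp
#include <algorithm>
#include <cstdio>

struct LaunchConfig {
    int blockSize;   // threads per block
    int gridSize;    // number of blocks
};

// Pick a block size that is a multiple of the warp size, capped at the device
// limit and no larger than the problem needs, then round the problem size up
// to a whole number of blocks.
LaunchConfig chooseLaunch(long long nElements, int maxThreadsPerBlock = 1024) {
    long long rounded = ((nElements + 31) / 32) * 32;                    // warp multiple
    int block = static_cast<int>(std::min<long long>(maxThreadsPerBlock, rounded));
    block = std::max(block, 32);
    int grid = static_cast<int>((nElements + block - 1) / block);        // ceiling
    return {block, std::max(grid, 1)};
}

int main() {
    LaunchConfig cfg = chooseLaunch(100000);
    std::printf("block=%d grid=%d\n", cfg.blockSize, cfg.gridSize);      // block=1024 grid=98
}
```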