1

Practical Analysis of the Dynamic Characteristics of JavaScript

Wei, Shiyi 05 October 2015
JavaScript is a dynamic object-oriented programming language, which is designed with flexible programming mechanisms. JavaScript is widely used in developing sophisticated software systems, especially web applications. Despite its popularity, there is a lack of software tools that support JavaScript for software engineering clients. Dataflow analysis approximates software behavior by analyzing the program code; it is the foundation for many software tools. However, several unique features of JavaScript render existing dataflow analysis techniques ineffective. Reflective constructs, generating code at runtime, make it difficult to acquire the complete program at compile time. Dynamic typing, resulting in changes in object behavior, poses a challenge for building accurate models of objects. Different functionalities can be observed when a function is variadic; the variance of the function behavior may be caused by arguments whose values can only be known at runtime. Object constructors may be polymorphic such that objects created by the same constructor may contain different properties. In addition to object-oriented programming, JavaScript supports functional and procedural programming; this renders dataflow analysis techniques ineffective when a JavaScript application uses multiple paradigms. Dataflow analysis needs to handle these challenges. In this work, we present an analysis framework and several dataflow analyses that can handle dynamic features in JavaScript. The first contribution of our work is the design and instantiation of the JavaScript Blended Analysis Framework (JSBAF). This general-purpose and flexible framework judiciously combines dynamic and static analyses. We have implemented an instance of JSBAF, blended taint analysis, to demonstrate the practicality of the framework. Our second contribution is a novel context-sensitive points-to analysis for JavaScript that accurately models object property changes. This algorithm uses a new program representation that enables partial flow-sensitive analysis, a more accurate object representation, and an expanded points-to graph. We have defined parameterized state sensitivity (i.e., k-state sensitivity) and evaluated the effectiveness of 1-state-sensitive analysis as the static phase of JSBAF. The third contribution of our work is an adaptive context-sensitive analysis that selectively applies context-sensitive analysis at the function level. This two-stage adaptive analysis extracts function characteristics from an inexpensive points-to analysis and uses learning-based heuristics to decide on an appropriate context-sensitive analysis per function. The experimental results show that the adaptive analysis is more precise than any single context-sensitive analysis for several programs in the benchmarks, especially the multi-paradigm programs. / Ph. D.
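The blended-analysis idea can be made concrete with a small sketch. The following Python toy is not the JSBAF implementation (which instruments and analyzes real JavaScript); it merely assumes a dynamic phase has already recorded which dataflow edges were exercised at runtime, and lets the static phase propagate taint only along those observed edges, so reflectively generated code that never executed simply does not appear in the model. All edge, source, and sink names below are hypothetical.

```python
from collections import deque

# Hypothetical output of the dynamic phase: dataflow edges observed while
# running the program (in a blended setting this would come from execution
# traces of the actual JavaScript application).
observed_edges = {
    ("readCookie.ret", "main.data"),
    ("main.data", "render.arg"),
    ("render.arg", "domWrite.arg"),
    ("loadConfig.ret", "main.settings"),
}

sources = {"readCookie.ret"}   # values assumed to be attacker-controlled
sinks = {"domWrite.arg"}       # locations that must not receive tainted data


def blended_taint(edges, sources, sinks):
    """Forward reachability from taint sources over the observed edges only."""
    succs = {}
    for src, dst in edges:
        succs.setdefault(src, set()).add(dst)

    tainted = set(sources)
    worklist = deque(sources)
    while worklist:
        node = worklist.popleft()
        for nxt in succs.get(node, ()):
            if nxt not in tainted:
                tainted.add(nxt)
                worklist.append(nxt)
    return tainted & sinks


# Reports {'domWrite.arg'}: the cookie value reaches the DOM-writing sink.
print(blended_taint(observed_edges, sources, sinks))
```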
2

Comparison of Points-to Analyses

Gutzmann, Tobias January 2008
Points-to analysis is a static program analysis which computes possible reference relations between different parts of a program. It serves as input to many high-level analyses. Points-to analyses differ, among other things, in flow- and context-sensitivity, program representation, and object abstraction. Most program representations used for points-to analysis are sparse representations which abstract from, e.g., primitive data types and intra-procedural control flow. Thus, a certain degree of information is sacrificed for a compact program representation, which results in scalable performance. In this thesis, we present a framework which allows building different versions of Points-to SSA (P2SSA), a sparse, Memory SSA-based program representation. Distinct instantiations of P2SSA contain different levels of abstraction from a program's full representation. We present another framework which allows running points-to analyses on these program representations. We use these two frameworks to instantiate different versions of P2SSA and compare them in terms of analysis precision and execution time.
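As a reminder of what such analyses compute, the sketch below is a minimal flow- and context-insensitive points-to solver in Python, restricted to allocation and copy statements. It is far simpler than any P2SSA-based analysis discussed in the thesis, and the program being analyzed is invented.

```python
# Statements of a tiny invented program: ("alloc", x, site) means x = new
# object at allocation site `site`; ("copy", x, y) means x = y.
stmts = [
    ("alloc", "a", "o1"),
    ("alloc", "b", "o2"),
    ("copy", "c", "a"),
    ("copy", "c", "b"),
    ("copy", "d", "c"),
]


def points_to(stmts):
    """Flow-insensitive, subset-based points-to sets, iterated to a fixpoint."""
    pts = {}
    changed = True
    while changed:
        changed = False
        for stmt in stmts:
            if stmt[0] == "alloc":
                _, var, site = stmt
                if site not in pts.setdefault(var, set()):
                    pts[var].add(site)
                    changed = True
            else:  # copy: whatever the right-hand side may point to, the
                   # left-hand side may point to as well
                _, lhs, rhs = stmt
                new = pts.get(rhs, set()) - pts.setdefault(lhs, set())
                if new:
                    pts[lhs].update(new)
                    changed = True
    return pts


# {'a': {'o1'}, 'b': {'o2'}, 'c': {'o1', 'o2'}, 'd': {'o1', 'o2'}}
print(points_to(stmts))
```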
3

Access Path Based Dataflow Analysis For Sequential And Concurrent Programs

Arnab De, * 12 1900
In this thesis, we have developed a flow-sensitive dataflow analysis framework for value set analyses for Java-like languages. Our analysis framework is based on access paths: a variable followed by zero or more field accesses. We express our abstract states as maps from bounded access paths to abstract value sets. Using access paths instead of allocation sites enables us to perform strong updates on assignments to dynamically allocated memory locations. We also describe several optimizations to reduce the number of access paths that need to be tracked in our analysis. We have instantiated this framework for flow-sensitive pointer and null-pointer analysis for Java. We have implemented our analysis inside the Chord framework. A major part of our implementation is written declaratively using Datalog. We leverage the use of BDDs in Chord to keep our memory usage low. We show that our analysis is much more precise and faster than traditional flow-sensitive and flow-insensitive pointer and null-pointer analyses for Java. We further extend our access-path-based analysis framework to concurrent Java programs. We use the synchronization structure of the programs to transfer abstract states from one thread to another. Therefore, we do not need to make conservative assumptions about reads or writes to shared memory. We prove our analysis to be sound for the happens-before memory model, which is weaker than most common memory models, including sequential consistency and the Java Memory Model. We implement a null-pointer analysis for concurrent Java programs and show it to be more precise than the traditional analysis.
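The benefit of keying the abstract state on access paths rather than allocation sites can be illustrated with a toy in Python. This is not the Chord/Datalog implementation from the thesis: the abstract value domain is reduced to {null, nonnull}, the path bound and the statements are invented, and only the strong-update behavior is shown.

```python
NULL, NONNULL = "null", "nonnull"
K = 2  # bound on access-path length, e.g. ("x", "f") but not ("x", "f", "g")


def bound(path, k=K):
    """Truncate an access path to at most k components."""
    return path[:k]


def assign(state, lhs_path, values):
    """Assignment to a bounded access path permits a *strong* update:
    the old abstract values of that path are killed, not merged."""
    state = dict(state)
    state[bound(lhs_path)] = frozenset(values)
    return state


def merge(s1, s2):
    """Join of two abstract states at a control-flow merge (weak: union)."""
    out = dict(s1)
    for path, vals in s2.items():
        out[path] = out.get(path, frozenset()) | vals
    return out


# x.f = null;     -> x.f may be null
s = assign({}, ("x", "f"), {NULL})
# x.f = new T();  -> strong update kills the null value; with an
# allocation-site abstraction this write would typically be weak, and x.f
# would still be reported as possibly null here.
s = assign(s, ("x", "f"), {NONNULL})
assert s[("x", "f")] == frozenset({NONNULL})

# At a join point the states from two branches are merged weakly:
s_other = assign({}, ("x", "f"), {NULL})
print(merge(s, s_other))  # x.f is both null and nonnull after the merge
```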
4

Towards a Gold Standard for Points-to Analysis

Gutzmann, Tobias January 2010
Points-to analysis is a static program analysis that computes reference information for a given input program. It serves as input to many client applications in optimizing compilers and software engineering tools. Unfortunately, the Gold Standard, i.e., the exact reference information for a given program, is impossible to compute automatically for all but trivial cases, and thus, little can be said about the accuracy of points-to analysis. This thesis aims at paving the way towards a Gold Standard for points-to analysis. For this, we discuss theoretical implications and practical challenges that occur when comparing results obtained by different points-to analyses. We also show ways to improve points-to analysis by different means, e.g., combining different analysis implementations, and a novel approach to path sensitivity. We support our theories with a number of experiments.
5

Benchmarking Points-to Analysis

Gutzmann, Tobias January 2013
Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison: no benchmark suite (a set of programs with well-defined rules for how to compare different points-to analysis results) exists. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also makes it possible to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community. We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analysis and expert knowledge about benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform, through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
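The proposed bracketing of the Gold Standard between an optimistic and a conservative analysis amounts to a few set operations, sketched below in Python with invented points-to edges (the thesis performs this comparison on whole benchmark programs via the Web-based platform).

```python
# Points-to edges as (variable, allocation site) pairs. A dynamic analysis
# reports only edges it has actually observed, so it under-approximates the
# Gold Standard; a sound conservative static analysis over-approximates it.
dynamic_result = {("x", "o1"), ("y", "o2")}
static_result = {("x", "o1"), ("y", "o2"), ("y", "o3"), ("z", "o4")}

# Edges observed at run time are definitely part of the Gold Standard.
definitely_true = dynamic_result

# Edges the sound static analysis rules out cannot be part of it.
assert ("z", "o1") not in static_result

# The remaining gap is what better analyses or expert knowledge must resolve.
undecided = static_result - dynamic_result

print("gold standard contains:", sorted(definitely_true))
print("needs further inspection:", sorted(undecided))
```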
6

A Concurrent IFDS Dataflow Analysis Algorithm Using Actors

Rodriguez, Jonathan David January 2010
There has recently been a resurgence of interest in techniques for effective programming of multi-core computers. Most programmers find general-purpose concurrent programming to be extremely difficult. This difficulty severely limits the number of applications that currently benefit from multi-core computers. There already exist many concurrent solutions for the class of regular applications, which includes various algorithms for linear algebra. For the class of irregular applications, which operate on dynamic, pointer- and graph-based structures, efficient concurrent solutions have so far remained elusive. Dataflow analysis applications, which are often found in compilers and program analysis tools, have received particularly little attention with regard to execution on multi-core machines. Operating on the theory that the Actor model, which structures computations as systems of asynchronously communicating entities, is a more appropriate method for representing irregular algorithms than the shared-memory model, this work presents a concurrent Actor-based formulation of the IFDS, or Interprocedural Finite Distributive Subset, dataflow analysis algorithm. The algorithm is implemented in the Scala language using its Actors library, and it achieves significant speedup on multi-core machines without using any optimistic execution. This work contributes to Actor research by showing how the Actor model can be practically applied to a dataflow analysis problem. It contributes to static analysis research by showing how a dataflow analysis algorithm can effectively make use of multi-core machines, opening the possibility of faster and more precise analyses.
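The actor formulation can be sketched without the Scala Actors library. In the Python toy below, each node of the analyzed graph is owned by one actor that keeps the dataflow facts reaching it and reacts to messages carrying new facts; the mailbox is drained sequentially, so the sketch shows only the message-passing structure, not the multi-core speedup. The graph, the seed fact, and the identity flow functions are all invented simplifications of the full IFDS setting.

```python
from collections import deque

# Toy graph: a fact that holds at `src` induces the same fact at each
# successor (identity flow functions; IFDS allows any distributive function).
edges = {
    "entry": ["a", "b"],
    "a": ["c"],
    "b": ["c"],
    "c": [],
}


class NodeActor:
    """Owns the facts known at one node; no other code touches its state."""

    def __init__(self, name):
        self.name = name
        self.facts = set()

    def receive(self, fact, send):
        # A message delivers one fact; only genuinely new facts are forwarded,
        # which is what guarantees termination.
        if fact not in self.facts:
            self.facts.add(fact)
            for succ in edges[self.name]:
                send(succ, fact)


actors = {name: NodeActor(name) for name in edges}
mailbox = deque([("entry", "x-may-be-null")])   # initial seed fact
send = lambda node, fact: mailbox.append((node, fact))

# Sequential stand-in for the actor scheduler: deliver messages until quiet.
while mailbox:
    node, fact = mailbox.popleft()
    actors[node].receive(fact, send)

# Every node ends up holding {'x-may-be-null'}; node 'c' receives the fact
# twice but forwards it only once.
print({name: actor.facts for name, actor in actors.items()})
```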
7

GSM tinklo abonentų vietos duomenų srautų tyrimas / GSM network's subscribers' location dataflow analysis

Jatkonis, Eimantas 29 May 2006
This project analyses the location dataflows of mobile subscribers in a GSM network. The dataflows are analysed at the following network nodes: BTS, BSC, MSC, and HLR. The research examines how the data flows change over the course of a day and how these changes affect the network nodes. The possibilities for gathering and storing data about subscribers' locations are explored. The main goal of the project is to set guidelines for implementing European Union directive ST15449 in real GSM networks. Statistical data about the data flow types was supplied by a real GSM network operator. Experiments were performed using an emulator of GSM network data flows; additional features necessary for the analysis were specified and implemented. The analysis determined that the location dataflow has the lowest impact on the BSC component, and it is therefore proposed to implement location data gathering devices in BSC nodes.
