11

Rozšíření platformy pro analýzu datových toků o podporu knihoven na vkládání závislostí / Extending Data Lineage Analysis Platform with Support for Dependency Injection Frameworks

Riedel, Lukáš January 2021 (has links)
Data lineage forms an important aspect of today's enterprise environment. MANTA Flow is a data lineage analysis platform that already has basic support for analysis of Java programs, provided by one of its components called Bytecode Scanner. Nevertheless, there are very few applications in today's enterprise environment that do not use dependency injection at least in a very limited way. Therefore, we present an extension of Bytecode Scanner in the MANTA Flow platform to support data lineage analysis of dependency injection frameworks as well. The extension is able to process even complex definitions of standard dependency injection containers. Since dependency injection influences the selection of method call targets, we also provide a description of the call graph structure and its modification to support dependency injection. Finally, we use this infrastructure to design and implement a plugin for Bytecode Scanner supporting the Spring Framework, a popular dependency injection framework targeting the Java platform. The plugin has been successfully tested on a small but realistic software system that reads data from a file, transforms them, and writes them into a database.
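To make concrete why dependency injection complicates the choice of method call targets, here is a small, hypothetical Java sketch (not taken from the thesis or from MANTA Flow): the concrete target of repository.load() is only known once the container's bindings are consulted, which is exactly the information such an extension has to feed into the call graph.

    // Illustrative sketch: why dependency injection hides call targets from a purely
    // bytecode-based call-graph builder. All names are hypothetical.
    import java.util.Map;

    interface Repository { String load(); }

    class FileRepository implements Repository {
        public String load() { return "from file"; }
    }

    class DbRepository implements Repository {
        public String load() { return "from database"; }
    }

    class ReportService {
        private final Repository repository;   // injected; the concrete type is not visible here
        ReportService(Repository repository) { this.repository = repository; }
        String report() { return repository.load(); }   // call target depends on the container
    }

    public class DiCallTargetDemo {
        public static void main(String[] args) {
            // A hand-rolled "container": bindings that a DI framework would normally read
            // from annotations or XML. A call-graph analysis must consume this configuration
            // to resolve repository.load() to DbRepository.load() rather than to all implementations.
            Map<Class<?>, Object> bindings = Map.of(Repository.class, new DbRepository());
            ReportService service = new ReportService((Repository) bindings.get(Repository.class));
            System.out.println(service.report());
        }
    }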
12

A Framework for Call Graph Construction

Honar, Elnaz, Mortazavi Jahromi, Seyed AmirHossein January 2010 (has links)
In object oriented programming, a Call Graph represents the calling relationships between the program’s methods. To be more precise, a Call Graph is a rooted directed graph where each node of the graph represents a method and each edge (u, v) represents a method call from method u to method v. The focus of this thesis is on building a framework for Call Graph construction algorithms that can be used in program analysis. Our framework can be initialized with different front-ends and supports various Call Graph construction algorithms. Here, we instantiate the framework with two bytecode readers (ASM and Soot) as front-ends and implement three Call Graph construction algorithms (CHA, RTA and CTA). First, we used the two above-mentioned bytecode readers to read the bytecode of a specific Java program, then found the reachable methods for each invoked method, keeping the obtained details in our own data structures. Creating data structures for storing the required information about Classes, Methods, Fields and Statements gives us the opportunity to implement an independent framework for applying well-known Call Graph algorithms. Thanks to these data structures, Call Graph construction does not depend on the bytecode readers: whenever we read the bytecode of a program, we accumulate all the necessary information in pre-defined data structures and build our Call Graphs from this accumulated data. Finally, the result is a framework for different Call Graph construction algorithms, which is the goal of this thesis. We tested and evaluated the algorithms on a variety of benchmark programs and compared the bytecode readers as well as the three Call Graph algorithms in different aspects.
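As an illustration of the call graph defined above (a rooted directed graph whose nodes are methods and whose edges (u, v) record a call from u to v), a minimal Java sketch might look as follows; this shows the general data structure only, not code from the thesis framework, and the method names are invented.

    // Minimal sketch of a call graph: nodes are methods, edges (u, v) mean "u may call v".
    import java.util.*;

    public class CallGraph {
        private final String root;                               // e.g. "Main.main"
        private final Map<String, Set<String>> edges = new HashMap<>();

        public CallGraph(String root) {
            this.root = root;
            edges.put(root, new LinkedHashSet<>());
        }

        public void addCall(String caller, String callee) {      // record edge (u, v)
            edges.computeIfAbsent(caller, k -> new LinkedHashSet<>()).add(callee);
            edges.computeIfAbsent(callee, k -> new LinkedHashSet<>());
        }

        public Set<String> reachableMethods() {                  // methods reachable from the root
            Set<String> seen = new LinkedHashSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(root));
            while (!work.isEmpty()) {
                String m = work.pop();
                if (seen.add(m)) work.addAll(edges.getOrDefault(m, Set.of()));
            }
            return seen;
        }

        public static void main(String[] args) {
            CallGraph g = new CallGraph("Main.main");
            g.addCall("Main.main", "Service.run");
            g.addCall("Service.run", "Dao.query");
            System.out.println(g.reachableMethods());   // [Main.main, Service.run, Dao.query]
        }
    }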
13

Analyse du flot de contrôle multivariante : application à la détection de comportements des programmes / Multivariant control flow analysis : application to behavior detection in programs

Laouadi, Rabah 14 December 2016 (has links)
Sans exécuter une application, est-il possible de prévoir quelle est la méthode cible d’un site d’appel ? Est-il possible de savoir quels sont les types et les valeurs qu’une expression peut contenir ? Est-il possible de déterminer de manière exhaustive l’ensemble de comportements qu’une application peut effectuer ? Dans les trois cas, la réponse est oui, à condition d’accepter une certaine approximation. Il existe une classe d’algorithmes − peu connus à l’extérieur du cercle académique − qui analysent et simulent un programme pour calculer de manière conservatrice l’ensemble des informations qui peuvent être véhiculées dans une expression. Dans cette thèse, nous présentons ces algorithmes appelés CFAs (acronyme de Control Flow Analysis), plus précisément l’algorithme multivariant k-l-CFA. Nous combinons l’algorithme k-l-CFA avec l’analyse de taches (taint analysis), qui consiste à suivre une donnée sensible dans le flot de contrôle, afin de déterminer si elle atteint un puits (un flot sortant du programme). Cet algorithme, en combinaison avec l’interprétation abstraite pour les valeurs, a pour objectif de calculer de manière aussi exhaustive que possible l’ensemble des comportements d’une application. L’un des problèmes de cette approche est le nombre élevé de faux-positifs, qui impose un post-traitement humain. Il est donc essentiel de pouvoir augmenter la précision de l’analyse en augmentant k. k-l-CFA est notoirement connu comme étant très combinatoire, sa complexité étant exponentielle dans la valeur de k. La première contribution de cette thèse est de concevoir un modèle et une implémentation la plus efficace possible, en séparant soigneusement les parties statiques et dynamiques de l’analyse, pour permettre le passage à l’échelle. La seconde contribution de cette thèse est de proposer une nouvelle variante de CFA basée sur k-l-CFA, et appelée *-CFA, qui consiste à faire du paramètre k une propriété de chaque variante, de façon à ne l’augmenter que dans les contextes qui le justifient. Afin d’évaluer l’efficacité de notre implémentation de k-l-CFA, nous avons effectué une comparaison avec le framework Wala. Ensuite, nous validons l’analyse de taches et la détection de comportements avec le Benchmark DroidBench. Enfin, nous présentons les apports de l’algorithme *-CFA par rapport aux algorithmes standards de CFA dans le contexte d’analyse de taches et de détection de comportements. / Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms - little known outside of academia - that can simulate and analyze a program to compute conservatively all information that can be conveyed in an expression.
In this thesis, we present these algorithms, called CFAs (Control Flow Analyses), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data through the control flow to determine whether it reaches a sink (an outgoing flow of the program). This combination, together with abstract interpretation for values, aims to identify as exhaustively as possible all behaviors performed by an application. The problem with this approach is the high number of false positives, which requires human post-processing. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously combinatorial, its complexity being exponential in the value of k. The first contribution of this thesis is to design a model and an implementation that is as efficient as possible, carefully separating the static and dynamic parts of the analysis, to allow scalability. The second contribution of this thesis is to propose a new CFA variant based on k-l-CFA, called *-CFA, which makes the parameter k a property of each variant, increasing it only in the contexts that justify it. To evaluate the effectiveness of our implementation of k-l-CFA, we compare it with the Wala framework. We then validate our taint analysis and behavior detection with the DroidBench benchmark. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
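For illustration, the kind of source-to-sink flow such a taint analysis is meant to detect statically can be sketched in a few lines of Java; the source and sink names below are hypothetical and not taken from the thesis.

    // Illustrative source-to-sink flow that a taint analysis aims to detect statically.
    public class TaintFlowExample {
        // Source: sensitive data entering the program.
        static String readDeviceId() { return "IMEI-123456"; }

        // Sink: an outgoing flow of the program (network, log, file, ...).
        static void sendOverNetwork(String payload) { System.out.println("sent: " + payload); }

        public static void main(String[] args) {
            String id = readDeviceId();          // id becomes tainted
            String msg = "device=" + id;         // taint propagates through the concatenation
            sendOverNetwork(msg);                // tainted value reaches a sink: reported as a behavior
        }
    }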
14

Advanced Memory Data Structures for Scalable Event Trace Analysis

Knüpfer, Andreas 17 April 2009 (has links) (PDF)
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure which allows domain specific data compression for event traces in a combination of lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated in the construction process, such that at no point substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small scale compression with minimum deviation of the data to factors > 100 for large scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression. By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced to the same extent as memory consumption. The thesis includes a comprehensive discussion of the state-of-the-art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components. / Diese Dissertation stellt einen neuartigen Ansatz für die Analyse und Visualisierung der Berechnungs-Performance vor, der auf dem Ereignis-Tracing basiert und insbesondere auf parallele Programme und das Hochleistungsrechnen (High Performance Computing, HPC) zugeschnitten ist. Ereignis-Traces (Ereignis-Spuren) enthalten detaillierte Informationen über spezifizierte Ereignisse während der Laufzeit eines Programms und erlauben eine sehr genaue Untersuchung des dynamischen Verhaltens, verschiedener Performance-Metriken und potentieller Performance-Probleme. Aufgrund lang laufender und hoch paralleler Anwendungen und dem hohen Detailgrad kann das Ereignis-Tracing sehr große Datenmengen produzieren. Diese stellen ihrerseits eine Herausforderung für interaktive und automatische Analyse- und Visualisierungswerkzeuge dar. Die vorliegende Arbeit präsentiert eine Methode, die Redundanzen in den Ereignis-Traces ausnutzt, um sowohl die Speicheranforderungen als auch die Laufzeitkomplexität der Trace-Analyse zu reduzieren.
Die Ursachen für Redundanzen sind wiederholt ausgeführte Programmabschnitte, entweder durch iterative oder rekursive Algorithmen oder durch SPMD-Parallelisierung, die gleiche oder ähnliche Ereignis-Sequenzen erzeugen. Die Datenreduktion basiert auf der neuartigen Datenstruktur der "Vollständigen Aufruf-Graphen" (Complete Call Graph, CCG) und erlaubt eine Kombination von verlustfreier und verlustbehafteter Datenkompression. Dabei können konstante Grenzen für alle Abweichungen durch verlustbehaftete Kompression vorgegeben werden. Die Datenkompression ist in den Aufbau der Datenstruktur integriert, so dass keine umfangreichen unkomprimierten Teile vor der Kompression im Hauptspeicher gehalten werden müssen. Das enorme Kompressionsvermögen des neuen Ansatzes wird anhand einer Reihe von Beispielen aus realen Anwendungsszenarien nachgewiesen. Die dabei erzielten Resultate reichen von Kompressionsfaktoren von 3 bis 5 mit nur minimalen Abweichungen aufgrund der verlustbehafteten Kompression bis zu Faktoren > 100 für hochgradige Kompression. Basierend auf der CCG-Datenstruktur werden außerdem neue Auswertungs- und Analyseverfahren für Ereignis-Traces vorgestellt, die ohne explizite Dekompression auskommen. Damit kann die Laufzeitkomplexität der Analyse im selben Maß gesenkt werden wie der Hauptspeicherbedarf, indem komprimierte Ereignis-Sequenzen nicht mehrmals analysiert werden. Die vorliegende Dissertation enthält eine ausführliche Vorstellung des Stands der Technik und verwandter Arbeiten in diesem Bereich, eine detaillierte Herleitung der neu eingeführten Datenstrukturen, der Konstruktions-, Kompressions- und Analysealgorithmen sowie eine umfangreiche experimentelle Auswertung und Validierung aller Bestandteile.
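The core compression idea of the Complete Call Graph, storing repeated identical call subtrees only once, can be sketched roughly as follows. This is an illustrative hash-consing example in Java, not the CCG implementation, and it covers only the lossless case; the lossy merging within constant bounds described above is omitted.

    // Sharing repeated, identical call subtrees so each distinct subtree is stored once.
    import java.util.*;

    public class SharedCallTree {
        // A node is identified by the called function and its (already shared) children.
        record Node(String function, List<Node> children) {}

        private final Map<Node, Node> pool = new HashMap<>();    // canonical nodes

        Node node(String function, Node... children) {
            Node candidate = new Node(function, List.of(children));
            return pool.computeIfAbsent(candidate, n -> n);       // reuse an equal subtree if present
        }

        public static void main(String[] args) {
            SharedCallTree t = new SharedCallTree();
            // Two loop iterations producing the same call sequence main -> step -> {read, write}
            Node iter1 = t.node("step", t.node("read"), t.node("write"));
            Node iter2 = t.node("step", t.node("read"), t.node("write"));
            Node root = t.node("main", iter1, iter2);
            System.out.println(iter1 == iter2);       // true: the repeated subtree is stored once
            System.out.println(t.pool.size());        // 4 distinct nodes instead of 7
        }
    }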
15

A Framework for Call Graph Construction

Honar, Elnaz, Mortazavi Jahromi, Seyed AmirHossein January 2010 (has links)
In object oriented programming, a Call Graph represents the calling relationships between the program’s methods. To be more precise, a Call Graph is a rooted directed graph where each node of the graph represents a method and each edge (u, v) represents a method call from method u to method v. The focus of this thesis is on building a framework for Call Graph construction algorithms that can be used in program analysis. Our framework can be initialized with different front-ends and supports various Call Graph construction algorithms. Here, we instantiate the framework with two bytecode readers (ASM and Soot) as front-ends and implement three Call Graph construction algorithms (CHA, RTA and CTA). First, we used the two above-mentioned bytecode readers to read the bytecode of a specific Java program, then found the reachable methods for each invoked method, keeping the obtained details in our own data structures. Creating data structures for storing the required information about Classes, Methods, Fields and Statements gives us the opportunity to implement an independent framework for applying well-known Call Graph algorithms. Thanks to these data structures, Call Graph construction does not depend on the bytecode readers: whenever we read the bytecode of a program, we accumulate all the necessary information in pre-defined data structures and build our Call Graphs from this accumulated data. Finally, the result is a framework for different Call Graph construction algorithms, which is the goal of this thesis. We tested and evaluated the algorithms on a variety of benchmark programs and compared the bytecode readers as well as the three Call Graph algorithms in different aspects.
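For illustration, a rough sketch of how one of the three named algorithms, Class Hierarchy Analysis (CHA), resolves a virtual call: every subtype of the declared receiver type that provides the method becomes a possible call target. The toy hierarchy below is assumed for the example and is not taken from the thesis.

    // Toy CHA resolution: targets of "shape.draw()" are all subtypes of Shape that declare draw().
    import java.util.*;

    public class ChaSketch {
        // declared type -> its direct subtypes (an assumed, tiny class hierarchy)
        static Map<String, List<String>> subtypes = Map.of(
                "Shape", List.of("Circle", "Square"),
                "Circle", List.of(), "Square", List.of());
        // classes that declare a draw() method
        static Set<String> declaresDraw = Set.of("Circle", "Square");

        static List<String> resolveDraw(String declaredType) {
            List<String> targets = new ArrayList<>();
            Deque<String> work = new ArrayDeque<>(List.of(declaredType));
            while (!work.isEmpty()) {
                String c = work.pop();
                if (declaresDraw.contains(c)) targets.add(c + ".draw");
                work.addAll(subtypes.getOrDefault(c, List.of()));
            }
            return targets;
        }

        public static void main(String[] args) {
            System.out.println(resolveDraw("Shape"));   // [Circle.draw, Square.draw]
        }
    }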
16

Advanced Memory Data Structures for Scalable Event Trace Analysis

Knüpfer, Andreas 16 December 2008 (has links)
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure which allows domain specific data compression for event traces in a combination of lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated in the construction process, such that at no point substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small scale compression with minimum deviation of the data to factors > 100 for large scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression. By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced to the same extent as memory consumption. The thesis includes a comprehensive discussion of the state-of-the-art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components. / Diese Dissertation stellt einen neuartigen Ansatz für die Analyse und Visualisierung der Berechnungs-Performance vor, der auf dem Ereignis-Tracing basiert und insbesondere auf parallele Programme und das Hochleistungsrechnen (High Performance Computing, HPC) zugeschnitten ist. Ereignis-Traces (Ereignis-Spuren) enthalten detaillierte Informationen über spezifizierte Ereignisse während der Laufzeit eines Programms und erlauben eine sehr genaue Untersuchung des dynamischen Verhaltens, verschiedener Performance-Metriken und potentieller Performance-Probleme. Aufgrund lang laufender und hoch paralleler Anwendungen und dem hohen Detailgrad kann das Ereignis-Tracing sehr große Datenmengen produzieren. Diese stellen ihrerseits eine Herausforderung für interaktive und automatische Analyse- und Visualisierungswerkzeuge dar. Die vorliegende Arbeit präsentiert eine Methode, die Redundanzen in den Ereignis-Traces ausnutzt, um sowohl die Speicheranforderungen als auch die Laufzeitkomplexität der Trace-Analyse zu reduzieren.
Die Ursachen für Redundanzen sind wiederholt ausgeführte Programmabschnitte, entweder durch iterative oder rekursive Algorithmen oder durch SPMD-Parallelisierung, die gleiche oder ähnliche Ereignis-Sequenzen erzeugen. Die Datenreduktion basiert auf der neuartigen Datenstruktur der "Vollständigen Aufruf-Graphen" (Complete Call Graph, CCG) und erlaubt eine Kombination von verlustfreier und verlustbehafteter Datenkompression. Dabei können konstante Grenzen für alle Abweichungen durch verlustbehaftete Kompression vorgegeben werden. Die Datenkompression ist in den Aufbau der Datenstruktur integriert, so dass keine umfangreichen unkomprimierten Teile vor der Kompression im Hauptspeicher gehalten werden müssen. Das enorme Kompressionsvermögen des neuen Ansatzes wird anhand einer Reihe von Beispielen aus realen Anwendungsszenarien nachgewiesen. Die dabei erzielten Resultate reichen von Kompressionsfaktoren von 3 bis 5 mit nur minimalen Abweichungen aufgrund der verlustbehafteten Kompression bis zu Faktoren > 100 für hochgradige Kompression. Basierend auf der CCG-Datenstruktur werden außerdem neue Auswertungs- und Analyseverfahren für Ereignis-Traces vorgestellt, die ohne explizite Dekompression auskommen. Damit kann die Laufzeitkomplexität der Analyse im selben Maß gesenkt werden wie der Hauptspeicherbedarf, indem komprimierte Ereignis-Sequenzen nicht mehrmals analysiert werden. Die vorliegende Dissertation enthält eine ausführliche Vorstellung des Stands der Technik und verwandter Arbeiten in diesem Bereich, eine detaillierte Herleitung der neu eingeführten Datenstrukturen, der Konstruktions-, Kompressions- und Analysealgorithmen sowie eine umfangreiche experimentelle Auswertung und Validierung aller Bestandteile.
17

Funqual: User-Defined, Statically-Checked Call Graph Constraints in C++

Nelson, Andrew P 01 June 2018 (has links) (PDF)
Static analysis tools can aid programmers by reporting potential programming mistakes prior to the execution of a program. Funqual is a static analysis tool that reads C++17 code "in the wild" and checks that the function call graph follows a set of rules which can be defined by the user. This sort of analysis can help the programmer to avoid errors such as accidentally calling blocking functions in time-sensitive contexts or accidentally allocating memory in heap-sensitive environments. To accomplish this, we create a type system whereby functions can be given user-defined type qualifiers and where users can define their own restrictions on the call graph based on these type qualifiers. We demonstrate that this tool, when used with hand-crafted rules, can catch certain types of errors which commonly occur in the wild. We claim that this tool can be used in a production setting to catch certain kinds of errors in code before that code is even run.
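The kind of rule Funqual enforces can be sketched conceptually as follows, written in Java to match the other examples on this page; the qualifier names and the rule are hypothetical and do not reflect Funqual's actual C++ annotation syntax.

    // Conceptual check: no function qualified "realtime" may transitively reach a "blocking" one.
    import java.util.*;

    public class CallRuleCheck {
        static Map<String, Set<String>> calls = Map.of(
                "audioCallback", Set.of("mixSamples"),
                "mixSamples", Set.of("log"),
                "log", Set.of());
        static Map<String, String> qualifier = Map.of(
                "audioCallback", "realtime", "mixSamples", "realtime", "log", "blocking");

        static List<String> violations() {
            List<String> out = new ArrayList<>();
            for (String f : calls.keySet()) {
                if (!"realtime".equals(qualifier.get(f))) continue;
                Deque<String> work = new ArrayDeque<>(calls.get(f));
                Set<String> seen = new HashSet<>();
                while (!work.isEmpty()) {
                    String g = work.pop();
                    if (!seen.add(g)) continue;
                    if ("blocking".equals(qualifier.get(g))) out.add(f + " reaches blocking " + g);
                    work.addAll(calls.getOrDefault(g, Set.of()));
                }
            }
            return out;
        }

        public static void main(String[] args) {
            violations().forEach(System.out::println);
            // reports that audioCallback and mixSamples both reach the blocking function log
        }
    }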
18

Defect Localization using Dynamic Call Tree Mining and Matching and Request Replication: An Alternative to QoS-aware Service Selection

Yousefi, Anis 04 1900 (has links)
This thesis is concerned with two separate subjects; (i) Defect localization using tree mining and tree matching, and (ii) Quality-of-service-aware service selection; it is divided into these parts accordingly. In the first part of this thesis we present a novel technique for defect localization which is able to localize call-graph-affecting defects using tree mining and tree matching techniques. In this approach, given a set of successful executions and a failing execution and by following a series of analyses we generate an extended report of suspicious method calls. The proposed defect localization technique is implemented as a prototype and evaluated using four subject programs of various sizes, developed in Java or C. Our experiments show comparable results to similar defect localization tools, but unlike most of its counterparts, we do not require the availability of multiple failing executions to localize the defects. We believe that this is a major advantage, since it is often the case that we have only a single failing execution to work with. Potential risks of the proposed technique are also investigated. In the second part of this thesis we present an alternative strategy for service selection in service oriented architecture, which provides better quality services for less cost. The proposed Request Replication technique replicates a client’s request over a number of cheap, low quality services to gain the required quality of service. Following this approach, we also present a number of recommendations about how service providers should advertise non-functional properties of their services. / Doctor of Philosophy (PhD)
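The intuition behind comparing dynamic call trees of passing and failing executions can be sketched as follows. This is a deliberately simplified illustration with made-up method names, not the thesis's tree-mining and tree-matching technique.

    // Calls that appear only in the failing execution are flagged as suspicious.
    import java.util.*;

    public class SuspiciousCalls {
        public static void main(String[] args) {
            // Edges "caller->callee" observed in the dynamic call trees of each execution.
            Set<String> passing = Set.of("main->parse", "parse->validate", "main->save");
            Set<String> failing = new LinkedHashSet<>(List.of(
                    "main->parse", "parse->validate", "validate->retryParse", "main->save"));

            failing.removeAll(passing);                     // keep only calls unique to the failing run
            System.out.println("suspicious: " + failing);   // [validate->retryParse]
        }
    }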
19

An Adaptive Recompilation Framework For Rotor And Architectural Support For Online Program Instrumentation

Vaswani, Kapil 08 1900 (has links)
Microsoft Research / Although runtime systems and the dynamic compilation model have revolutionized the process of application development and deployment, the associated performance overheads continue to be a cause for concern and much research. In the first part of this thesis, we describe the design and implementation of an adaptive recompilation framework for Rotor, a shared source implementation of the Common Language Infrastructure (CLI) that can increase program performance through intelligent recompilation decisions and optimizations based on the program's past behavior. Our extensions to Rotor include a low overhead runtime-stack based sampling profiler that identifies program hotspots. A recompilation controller oversees the recompilation process and generates recompilation requests. At the first level of a multi-level optimizing compiler, code in the intermediate language is converted to an internal intermediate representation and optimized using a set of simple transformations. The compiler uses a fast yet effective linear scan algorithm for register allocation. Hot methods can be instrumented in order to collect basic-block, edge and call-graph profile information. Profile-guided optimizations driven by online profile information are used to further optimize heavily executed methods at the second level of recompilation. An evaluation of the framework using a set of test programs shows that performance can improve by a maximum of 42.3% and by 9% on average. Our results also show that the overheads of collecting accurate profile information through instrumentation to some extent outweigh the benefits of profile-guided optimizations in our implementation, suggesting the need for implementing techniques that can reduce such overheads. A flexible and extensible framework design implies that additional profiling and optimization techniques can be easily incorporated to further improve performance. As previously stated, fine-grained and accurate profile information must be available at low cost for advanced profile-guided optimizations to be effective in online environments. In the second part of this thesis, we propose a generic framework that makes it possible for instrumentation based profilers to collect profile data efficiently, a task that has traditionally been associated with high overheads. The essence of the scheme is to make the underlying hardware aware of instrumentation using a special set of profile instructions and tuned microarchitecture. This not only allows the hardware to provide the runtime with mechanisms to control the profiling activity, but also makes it possible for the hardware itself to optimize the process of profiling in a manner transparent to the runtime. We propose selective instruction dispatch as one possible controlling mechanism that can be used by the runtime to manage the execution of profile instructions and keep profiling overheads in check. We propose profile flag prediction, a hardware optimization that complements the selective dispatch mechanism by not fetching profile instructions when the runtime has turned profiling off. The framework is light-weight and flexible. It eliminates the need for expensive book-keeping, recompilation or code duplication. Our simulations with benchmarks from the SPEC CPU2000 suite show that overheads for call-graph and basic block profiling can be reduced by 72.7% and 52.4% respectively with a negligible loss in accuracy.
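The hotspot-detection loop described above (a runtime-stack sampling profiler feeding a recompilation controller) can be sketched roughly as follows; the class, method names and threshold are hypothetical, and this is not Rotor code.

    // Count stack-top samples per method; request recompilation once a method gets hot.
    import java.util.*;

    public class SamplingController {
        private final Map<String, Integer> samples = new HashMap<>();
        private final Set<String> recompiled = new HashSet<>();
        private final int threshold;

        SamplingController(int threshold) { this.threshold = threshold; }

        // Called on every sampling tick with the method currently on top of the runtime stack.
        void sample(String topOfStackMethod) {
            int n = samples.merge(topOfStackMethod, 1, Integer::sum);
            if (n >= threshold && recompiled.add(topOfStackMethod)) {
                System.out.println("recompile at higher optimization level: " + topOfStackMethod);
            }
        }

        public static void main(String[] args) {
            SamplingController c = new SamplingController(3);
            for (String m : List.of("App.loop", "App.loop", "App.init", "App.loop", "App.loop")) {
                c.sample(m);
            }
            // prints the recompilation request for App.loop once, after its third sample
        }
    }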
20

Capturing JUnit Behavior into Static Programs : Static Testing Framework

Siddiqui, Asher January 2010 (has links)
This research evaluates the benefits achievable from a static testing framework by analyzing and transforming the JUnit3.8 source code and statically executing the transformed code. A static structure enables us to analyze the code statically during the creation and execution of test cases. This line of work is by now well established in static analysis and testing research, and it has proved particularly valuable for those who want to understand the reflective behavior of the JUnit3.8 framework. The JUnit3.8 framework uses the Java Reflection API to invoke its core functionality (test case creation and execution) dynamically. The Java Reflection API allows developers to access and modify the structure and behavior of a program. Reflection provides a flexible solution for creating test cases and controlling their execution: it helps to encapsulate test cases in a single object representing the test suite and to associate each test method with a test object. However, while reflection is a powerful tool, it limits static analysis; static analysis tools often cannot work effectively with reflection. To avoid reflection, the Static Testing Framework provides a static platform that analyzes the JUnit3.8 source code and transforms it into a non-reflective version that emulates the dynamic behavior of JUnit3.8. The transformed source code replaces reflection with static code and does in the execution environment of the Static Testing Framework what reflection does in JUnit3.8. Moreover, the transformed code enables the execution environment of the Static Testing Framework to run test methods statically. To measure its efficiency, the implemented tool is evaluated: the Static Testing Framework is run on different Java projects and the resulting statistics are compared with JUnit3.8 results. As a result of the evaluation, STF can be used for the static creation and execution of test cases, up to the level of JUnit3.8, in situations where test cases are not created within a test class and where the real definition of constructors is not required. These limitations can be addressed as future work by introducing a middle layer to execute test fixtures for each test method and by generating test classes according to the real definition of constructors.
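The contrast between JUnit3.8's reflective test execution and a static equivalent, which is essentially the transformation the Static Testing Framework performs, can be illustrated with a self-contained sketch; the test class is hypothetical and this is neither JUnit's nor STF's actual code.

    // Reflective test discovery/invocation versus explicit, statically analyzable calls.
    import java.lang.reflect.Method;

    public class ReflectiveVsStaticRunner {
        static class CalculatorTest {
            public void testAdd()      { assert 1 + 1 == 2; }
            public void testSubtract() { assert 2 - 1 == 1; }
        }

        // Reflective style: call targets are invisible to a static call-graph analysis.
        static void runReflectively(Object testObject) throws Exception {
            for (Method m : testObject.getClass().getDeclaredMethods()) {
                if (m.getName().startsWith("test")) m.invoke(testObject);
            }
        }

        // Static style: the same behavior with explicit, analyzable calls.
        static void runStatically(CalculatorTest t) {
            t.testAdd();
            t.testSubtract();
        }

        public static void main(String[] args) throws Exception {
            runReflectively(new CalculatorTest());
            runStatically(new CalculatorTest());
            System.out.println("all test methods executed");
        }
    }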
