21. Identification and annotation of concurrency design patterns in Java source code using static analysis. Mwebesa, Martin, 01 December 2011
Concurrent software is quickly becoming an important facet of software engineering because of its numerous advantages, one of which is increased processing speed. Despite its importance, concurrent software is fraught with bugs that are very difficult to detect, such as deadlocks and data races. Concurrency design patterns were created to offer tried and tested means of designing and developing concurrent software that, amongst other things, minimize the occurrence of these hard-to-detect bugs. In this thesis we present a novel static analysis technique that detects these concurrency design patterns in Java source code and identifies them using commented Java annotations. Using our technique, the commented Java annotations are inserted above the Java constructs that make up the various roles comprising the concurrency design pattern. Identifying the concurrency design patterns in the source code aids their later maintenance, since the inserted annotations can be matched to the constructs they annotate. Maintaining the concurrency design patterns within the Java source code in turn helps keep the source code free of these errors. / UOIT
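As an illustration of the approach described above, a commented annotation might be emitted above each construct that plays a role in a detected pattern. The pattern name, annotation syntax, and role names below are hypothetical, not taken from the thesis:

```java
/** Minimal sketch: commented annotations mark the constructs that make up
 *  the roles of a detected concurrency design pattern (Guarded Suspension).
 *  Annotation names and role names are assumptions for illustration. */
public class TaskQueue {

    //@ConcurrencyPattern(name = "GuardedSuspension", role = "monitorLock")
    private final Object lock = new Object();

    private final java.util.Deque<Runnable> tasks = new java.util.ArrayDeque<>();

    //@ConcurrencyPattern(name = "GuardedSuspension", role = "guardedMethod")
    public Runnable take() throws InterruptedException {
        synchronized (lock) {
            while (tasks.isEmpty()) {   // guard condition
                lock.wait();
            }
            return tasks.removeFirst();
        }
    }

    //@ConcurrencyPattern(name = "GuardedSuspension", role = "stateChangingMethod")
    public void put(Runnable task) {
        synchronized (lock) {
            tasks.addLast(task);
            lock.notifyAll();           // wakes threads blocked in take()
        }
    }
}
```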
22. Object Histories in Java. Nair, Aakarsh, 21 April 2010
Developers are often faced with the task of implementing new features or diagnosing problems in large software systems. Convoluted control and data flows in large object-oriented software systems, however, make even simple tasks extremely difficult, time-consuming, and frustrating. Specifically, Java programs manipulate objects by adding and removing them from collections and by putting and getting them from other objects' fields. Complex object histories hinder program understanding by forcing software maintainers to track the provenance of objects through their past histories when diagnosing software faults.
In this thesis, we present a novel approach that answers queries about the evolution of objects throughout their lifetime in a program. On-demand answers to object history queries aid the maintenance of large software systems by allowing developers to pinpoint relevant details quickly.
We describe an event-based, flow-insensitive, interprocedural program analysis technique for computing object histories and answering history queries. Our analysis technique identifies all relevant events affecting an object and uses pointer analysis to filter out irrelevant events. It uses prior knowledge of the meanings of methods in the Java collection classes to improve the quality of the histories.
We present the details of our technique and experimental results that highlight the utility of object histories in common programming tasks.
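As a rough illustration of the kinds of events such a history would contain, consider the sketch below; the class names and comments are hypothetical, not the thesis's notation:

```java
import java.util.*;

/** Illustrative sketch: the collection and field events an object history
 *  for `order` would record. Names are assumptions for illustration. */
public class OrderFlow {
    static class Order { String id; Order(String id) { this.id = id; } }
    static class Customer { Order lastOrder; }

    public static void main(String[] args) {
        Order order = new Order("A-17");          // event: allocation site
        List<Order> pending = new ArrayList<>();
        pending.add(order);                       // event: added to collection `pending`
        Customer c = new Customer();
        c.lastOrder = order;                      // event: stored in field Customer.lastOrder
        pending.remove(order);                    // event: removed from collection `pending`
        // A history query for `order` would report these events in order,
        // using pointer analysis to filter out events on unrelated objects.
        System.out.println(c.lastOrder.id);
    }
}
```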
24. Collection Disjointness Analysis in Java. Chu, Hang, January 2011
This thesis presents a collection disjointness analysis that finds disjointness relations between collections in Java. We define three types of disjointness relations between collections: must-shared, may-shared, and not-may-shared. The collection-disjointness analysis is implemented as a forward data-flow analysis using the Soot Java bytecode analysis framework. For method calls, which are usually difficult to handle in static analysis, our analysis provides a way of generating and reading method annotations to best approximate the behavior of the methods being called. Finally, this thesis presents experimental results of the collection-disjointness analysis on several test programs.
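As a hedged illustration of the three relations, the sketch below shows how they might be read off a small program; the example code and the interpretations in the comments are ours, not taken from the thesis:

```java
import java.util.*;

/** Illustrative sketch of must-shared, may-shared, and not-may-shared. */
public class DisjointnessExample {
    public static void main(String[] args) {
        Object shared = new Object();
        List<Object> a = new ArrayList<>();
        List<Object> b = new ArrayList<>();
        List<Object> c = new ArrayList<>();

        a.add(shared);
        b.add(shared);          // `a` and `b` are must-shared: on every path
                                // they contain a common object.
        if (args.length > 0) {
            c.add(shared);      // `a` and `c` are may-shared: they share an
        }                       // object on some paths only.

        List<Object> d = new ArrayList<>();
        d.add(new Object());    // `a` and `d` are not-may-shared: no path puts
                                // a common object into both collections.
        System.out.println(a.size() + b.size() + c.size() + d.size());
    }
}
```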
25. Integrating the SWEET WCET Analyzer into ARM-GCC with Extra WCFP Information to Enable WCET-Targeted Compiler Optimizations. Hao, Wen-Chuan, 23 December 2011
Finding the worst-case execution time (WCET) of a hard real-time system is extremely important. Only static WCET analysis can give an upper bound on the WCET that guarantees the deadline; however, industrial practice still relies on measurement-based WCET analysis, even for many hard real-time systems, because static analysis tools are not yet a mature technology.
We use SWEET (SWEdish Execution Time tool) to provide WCET analysis support for the ARM architecture. SWEET is a static WCET analyzer developed by the Mälardalen Real-Time Research Center (MRTC). We modified ARM-GCC to produce input files in the formats SWEET requires: ALF, TCD, and MAP. In addition, to support WCET-targeted optimization and to address the over-optimization issue, we modified SWEET to produce additional worst-case flow path (WCFP) information as well as second-worst-case information.
Testing with benchmark files from [1] shows that our modified ARM-GCC creates correct input files for SWEET and that the modified SWEET produces the additional worst-case information.
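The toolchain above targets C code compiled with ARM-GCC; purely as a language-neutral illustration of why worst-case flow path information matters, the following Java sketch shows a loop whose WCET bound depends on which branch the worst-case path takes. All names and numbers are hypothetical:

```java
/** Illustrative only: shows the WCFP idea, not the SWEET/ALF toolchain. */
public class WcetExample {
    static int process(int[] samples) {
        int acc = 0;
        for (int s : samples) {              // loop bound = samples.length
            if (s < 0) {
                acc += expensiveFilter(s);   // worst-case flow path (WCFP):
            } else {                         // every iteration takes this branch
                acc += s;                    // cheaper branch; a path through it
            }                                // gives the second-worst case
        }
        return acc;
    }

    static int expensiveFilter(int s) {
        int r = 0;
        for (int i = 0; i < 64; i++) r += (s * i) % 7;   // long-running work
        return r;
    }

    public static void main(String[] args) {
        System.out.println(process(new int[] {-3, 5, -8}));
    }
}
```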
26. Code Classification Based on Structure Similarity. Yang, Chia-hui, 14 September 2012
Automatically classifying the source code of malware variants is an important research issue in digital forensics. Malware classification reveals the complete behavior of a malware family, which simplifies the forensic task. In previous research, analysts have used malware binaries to perform dynamic analysis, or static analysis after reverse engineering. Malware developers, on the other hand, use anti-VM and obfuscation techniques to try to defeat malware classifiers.
As honeypots are increasingly deployed, researchers can obtain more and more malware source code, and analyzing this source code may be the best way to classify malware. In this paper, a novel classification approach is proposed based on the logic and directory-structure similarity of malware. The collected source code is classified by a hierarchical clustering algorithm. The proposed system not only classifies known malware correctly but also discovers new types of malware, and it saves forensic staff from spending time reanalyzing known malware. The system can also help reveal an attacker's behavior and purpose. The experimental results demonstrate that the system classifies the malware correctly and can be applied to other source code classification tasks.
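As a minimal sketch of directory-structure similarity combined with hierarchical clustering, the code below uses Jaccard similarity over sets of directory paths and single-linkage agglomerative merging; the similarity measure, linkage choice, threshold, and sample names are our assumptions, not the thesis's algorithm:

```java
import java.util.*;

/** Illustrative sketch: cluster samples by directory-structure similarity. */
public class StructureClustering {

    // Jaccard similarity between two sets of directory paths.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
    }

    // Highest pairwise similarity between members of two clusters (single linkage).
    static double linkage(Set<String> c1, Set<String> c2, Map<String, Set<String>> samples) {
        double best = 0.0;
        for (String m1 : c1)
            for (String m2 : c2)
                best = Math.max(best, jaccard(samples.get(m1), samples.get(m2)));
        return best;
    }

    // Agglomerative clustering: keep merging while some pair exceeds the threshold.
    static List<Set<String>> cluster(Map<String, Set<String>> samples, double threshold) {
        List<Set<String>> clusters = new ArrayList<>();
        for (String name : samples.keySet()) clusters.add(new HashSet<>(Set.of(name)));
        boolean merged = true;
        while (merged) {
            merged = false;
            outer:
            for (int i = 0; i < clusters.size(); i++) {
                for (int j = i + 1; j < clusters.size(); j++) {
                    if (linkage(clusters.get(i), clusters.get(j), samples) >= threshold) {
                        clusters.get(i).addAll(clusters.remove(j));
                        merged = true;
                        break outer;
                    }
                }
            }
        }
        return clusters;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> samples = Map.of(
            "variantA", Set.of("src/net", "src/crypto", "src/payload"),
            "variantB", Set.of("src/net", "src/crypto", "src/dropper"),
            "familyX",  Set.of("lib/keylog", "lib/exfil"));
        System.out.println(cluster(samples, 0.4));  // groups variantA with variantB
    }
}
```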
27. Programming Language Evolution and Source Code Rejuvenation. Pirkelbauer, Peter Mathias, December 2010
Programmers rely on programming idioms, design patterns, and workaround techniques to express fundamental design not directly supported by the language. Evolving languages often address frequently encountered problems by adding language and library support to subsequent releases. By using new features, programmers can express their intent more directly. As new concerns, such as parallelism or security, arise, early idioms and language facilities can become serious liabilities. Modern code sometimes benefits from optimization techniques not feasible for code that uses less expressive constructs. Manual source code migration is expensive, time-consuming, and prone to errors.
This dissertation discusses the introduction of new language features and libraries, exemplified by open-methods and a non-blocking growable array library. We describe the relationship of open-methods to various alternative implementation techniques. The benefits of open-methods materialize in simpler code, better performance, and similar memory footprint when compared to alternative implementation techniques.
Based on these findings, we develop the notion of source code rejuvenation, the automated migration of legacy code. Source code rejuvenation leverages enhanced programming language and library facilities by finding and replacing coding patterns that can be expressed through higher-level software abstractions. Raising the level of abstraction improves code quality by lowering software entropy. In conjunction with extensions to programming languages, source code rejuvenation offers an evolutionary trajectory towards more reliable, more secure, and better performing code.
We describe the tools that allow us to implement code rejuvenations efficiently. The Pivot source-to-source translation infrastructure and its traversal mechanism form the core of our machinery. In order to free programmers from representation details, we use a light-weight pattern-matching generator that turns a C++-like input language into pattern-matching code. The generated code integrates seamlessly with the rest of the analysis framework.
We utilize the framework to build analysis systems that find common workaround techniques for designated language extensions of C++0x (e.g., initializer lists). Moreover, we describe a novel system, TACE (template analysis and concept extraction), for the analysis of uninstantiated template code. Our tool automatically extracts requirements from the body of template functions. TACE helps programmers understand the requirements that their code de facto imposes on arguments and compare those de facto requirements to formal and informal specifications.
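The dissertation targets C++ and C++0x; purely as a language-neutral analogy for what source code rejuvenation automates, the sketch below shows a legacy Java idiom and the higher-level library facility a rejuvenation tool could substitute for it. The example is ours, not from the dissertation:

```java
import java.util.*;

/** Illustrative analogy only: idiom-to-feature migration in Java. */
public class RejuvenationExample {

    // Legacy idiom: manually populated list, then wrapped to be unmodifiable.
    static List<String> legacyConstants() {
        List<String> l = new ArrayList<>();
        l.add("read");
        l.add("write");
        l.add("execute");
        return Collections.unmodifiableList(l);
    }

    // Rejuvenated form: the same intent expressed with a newer library
    // facility (List.of, Java 9+), which a rejuvenation tool could substitute
    // after matching the legacy pattern above.
    static List<String> rejuvenatedConstants() {
        return List.of("read", "write", "execute");
    }

    public static void main(String[] args) {
        System.out.println(legacyConstants().equals(rejuvenatedConstants())); // true
    }
}
```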
28. Analysis of Davit Structure with a Telescopic Arm of a Ship. Chen, Hong-long, 20 July 2007
Maritime transportation is important to Taiwan's national development because Taiwan is surrounded by sea. This research therefore addresses the safety of the crane transportation system.
This research investigates the davit structure with a telescopic arm of a ship by means of static analysis and modal analysis. The structural model was built with the computer-aided design software SolidWorks and then analyzed with the finite element analysis software ANSYS.
Three situations were simulated. In the static analysis, the maximum displacement, the maximum von Mises stress, and the safety factors of the structure were obtained for 16 dimensional sets with the davit inclined at 53° and the telescopic arm at its shortest, and their trends were discussed. In addition, the original structure was checked with the telescopic arm at its longest for two further cases: the davit inclined at 53° and the davit horizontal. In the modal analysis, the natural frequencies and mode shapes of the original structure were found; the structure shows good vibration resistance and the possibility of resonance is low. It is hoped that this study can provide helpful references for davit structure designers in the future.
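For reference, the quantities reported by the static analysis follow the standard definitions of the von Mises equivalent stress (from the principal stresses) and the safety factor against yielding; these are textbook formulas, not taken from the thesis:

```latex
% Standard definitions (not specific to this thesis):
\sigma_{vM} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2
            + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]},
\qquad
n = \frac{\sigma_{yield}}{\sigma_{vM}}
```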
29. Towards a Framework for Static Analysis Based on Points-to Information. Edvinsson, Marcus, January 2007
Static analysis of source code or binary code retrieves information about a software program. In object-oriented languages, static points-to analysis retrieves information about objects and how they refer to each other. The result of points-to analysis is traditionally used to perform compiler optimizations, such as static resolution of polymorphic calls and dead-code elimination. More advanced optimizations have been suggested specifically for Java, such as synchronization removal and stack allocation of objects. Recently, software engineering tools using points-to analysis have appeared that aim to help developers understand and debug software. Altogether, there is a great variety of tools, from both academia and industry, that use or could use points-to analysis.
We aim to construct a framework that supports the development of new clients of points-to analysis results and the improvement of existing ones. We present two client analyses, escape analysis and side-effects analysis, and investigate their similarities and differences. The similarities concern the data structures and basic algorithms that both depend on; the differences lie in the way the two analyses use those data structures and algorithms. In order to reuse these in a framework, a specification language is needed to capture the differences. The client analyses are implemented with shared data structures and basic algorithms, but do not yet use a separate specification language.
The framework is evaluated against three goal criteria: development speed, analysis precision, and analysis speed. Development speed is ranked as most important, and the latter two are considered equally important. Thereafter we present related work and discuss it with respect to the goal criteria.
The evaluation of the framework is done in two separate experiments. The first experiment evaluates development speed and shows that the framework enables higher development speed compared to not using the framework. The second experiment evaluates the precision and speed of the analyses and shows that the different precisions of the points-to analysis are reflected in the precisions of the client analyses. It also shows that there is a trade-off between analysis precision and analysis speed to consider when choosing the analysis precision.
Finally, we discuss four alternative ways to continue the research towards a doctoral thesis.
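As a small illustration of one client analysis (escape analysis), the sketch below shows the kind of facts that can be derived from points-to information; the code and comments are ours, not the thesis's:

```java
/** Illustrative sketch: escaping vs. non-escaping objects. */
public class EscapeExample {
    private StringBuilder cached;                // objects stored here escape via the heap

    String localOnly() {
        StringBuilder sb = new StringBuilder();  // does not escape: only reachable
        sb.append("local");                      // from this stack frame, so it could
        return sb.toString();                    // be stack-allocated and needs no
    }                                            // synchronization

    StringBuilder escapes() {
        StringBuilder sb = new StringBuilder();  // escapes: the reference is stored
        cached = sb;                             // in a field and returned, so other
        return sb;                               // code may observe it later
    }
}
```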
30. A Framework for Software Security Testing and Evaluation. Dutta, Rahul Kumar, January 2015
Security in the automotive industry is a growing concern. As more smart electronic devices become connected to each other, our dependency on them is driving us to connect them with moving objects such as cars, buses, and trucks. As a result, safety and security issues related to automotive systems are becoming more relevant in the realm of internet-connected devices and objects. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC); improper input validation is one of them and is the one we address in our work. We implement a security evaluation framework that improves security in automotive software by identifying and removing software security vulnerabilities that arise from improper input validation during the SDLC. We propose to use this framework in the implementation and testing phases so that critical security-by-design deficiencies in the software can be easily addressed and mitigated.
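As a minimal sketch of the kind of input-validation flaw such a framework targets, consider the check below; the class, method, and limit are hypothetical, not taken from the thesis:

```java
/** Illustrative sketch: reject malformed or out-of-range input before it
 *  reaches the vehicle control software. Names and limits are assumptions. */
public class SpeedRequestHandler {
    private static final int MAX_SPEED_KMH = 250;

    public int parseRequestedSpeed(String raw) {
        // Validate format first: only 1-3 digit decimal strings are accepted.
        if (raw == null || !raw.matches("\\d{1,3}")) {
            throw new IllegalArgumentException("speed must be 1-3 digits");
        }
        int speed = Integer.parseInt(raw);
        // Then validate the range against the vehicle's limit.
        if (speed > MAX_SPEED_KMH) {
            throw new IllegalArgumentException("speed out of range");
        }
        return speed;
    }
}
```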