1 |
Exploiting structure for scalable software verification. Babić, Domagoj 11 1900
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses.
Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is, in general, undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions generally yield easier verification, but also less precise results.
This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and for constructing informal explanations of why a piece of code should work. A natural question to ask is: How can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers.
More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution. Structural abstraction solves a long-standing open problem --- scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but also has already found its way into other domains (like model checking), demonstrating the generality and applicability of this idea.
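The compact symbolic representations described above can be illustrated with a toy sketch (an invented example, not the thesis's actual algorithm): hash-consing symbolic expressions makes structurally repeated subexpressions shared, so data-flow dependencies are preserved as a DAG instead of being duplicated as a tree.

```python
# Toy sketch of building compact symbolic representations via
# hash-consing: structurally equal expressions are shared, so the
# result is a DAG whose sharing records data-flow dependencies.
# Illustration only, not the thesis's actual algorithm.

class SymExpr:
    _pool = {}  # hash-consing table: structurally equal nodes are shared

    def __new__(cls, op, *args):
        key = (op, args)
        if key not in cls._pool:
            node = super().__new__(cls)
            node.op, node.args = op, args
            cls._pool[key] = node
        return cls._pool[key]

    def __repr__(self):
        if not self.args:
            return str(self.op)
        return f"({self.op} {' '.join(map(repr, self.args))})"

def var(name):
    return SymExpr(name)

def add(a, b):
    return SymExpr("+", a, b)

def mul(a, b):
    return SymExpr("*", a, b)

# Symbolically execute the straight-line code:  t = x + y;  z = t * t
x, y = var("x"), var("y")
t = add(x, y)
z = mul(t, t)

# Both operands of z are the *same* node: the data-flow dependency on
# t is preserved, and the representation stays compact (a DAG).
assert z.args[0] is z.args[1]

def dag_size(e, seen=None):
    """Count distinct nodes reachable from e (DAG size, not tree size)."""
    seen = set() if seen is None else seen
    if id(e) in seen:
        return 0
    seen.add(id(e))
    return 1 + sum(dag_size(a, seen) for a in e.args if isinstance(a, SymExpr))

print(dag_size(z))  # 4 distinct nodes: x, y, (+ x y), (* t t)
```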
|
2 |
Language Specific Analysis of State Machine Models of Reactive Systems. Zurowska, Karolina 25 June 2014
Model Driven Development (MDD) is a paradigm introduced to overcome the complexities of modern software
development. In MDD, models are the primary artifacts that are developed, tested and refined,
with code produced by code generation. Analysis and verification of models are important
aspects of the MDD paradigm, because they improve understanding of the developed system and enable
the discovery of faults early in development. Even though many analysis methods exist (e.g., model checking, proof
systems), they are not directly applicable in the context of industrial MDD tools such as
IBM Rational Software Architect Real Time Edition (IBM RSA RTE). One of the main reasons for this
inapplicability is the difference between the modeling languages used in MDD tools (e.g., the UML-RT language in IBM
RSA RTE) and the input languages of existing analysis tools. These differences require implementing a transformation
from the modeling language to the input language of a tool.
UML-RT models, like other industrial MDD models, cannot be easily translated if the target languages do not
directly support key model features.
To address this problem we follow a research direction that deviates from the standard approaches: instead of bringing
MDD models to analysis tools, we bring analysis "closer" to MDD models. We introduce
analysis of UML-RT models dedicated to this modeling language.
To this end we use a formal internal representation of UML-RT models that preserves the important features of
these models, such as the hierarchical structure of components, asynchronous communication and action code.
This provides us with formalized models via a straightforward transformation. In addition, the approach
enables MDD-specific abstractions that reduce the size of the state space to be explored. To this
end we introduce several MDD-specific types of abstraction: for data (using symbolic execution), for structure
and for behavior. The work also includes model checking algorithms that exploit the modular nature of UML-RT models. The proposed approach is implemented in
a toolset that enables analysis of UML-RT models directly.
We show the results of experiments with UML-RT models developed in-house and obtained
from our industrial partner. / Thesis (Ph.D, Computing) -- Queen's University, 2014-06-24
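The kind of state-space analysis described above can be sketched, in heavily simplified form, as explicit-state reachability over a flat state machine. The model and property below are invented for illustration; real UML-RT models have hierarchical capsules, asynchronous messaging and action code.

```python
# Minimal explicit-state reachability check over a toy state machine,
# illustrating the kind of analysis applied to UML-RT models. The
# model and property are invented for illustration only.

from collections import deque

# Transitions: state -> list of (event, next_state)
transitions = {
    "idle":    [("start", "running")],
    "running": [("pause", "paused"), ("stop", "idle"), ("fail", "error")],
    "paused":  [("resume", "running"), ("stop", "idle")],
    "error":   [],  # deadlock: no outgoing transitions
}

def reachable(initial, transitions):
    """Breadth-first exploration of the reachable state space."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for _, t in transitions[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

states = reachable("idle", transitions)
deadlocks = {s for s in states if not transitions[s]}
print(sorted(states))     # all four states are reachable
print(sorted(deadlocks))  # only 'error' has no outgoing transitions
```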
|
5 |
Validating Software States Using Reverse Execution. Boland, Nathaniel Christian 03 May 2022
No description available.
|
6 |
Saviorganizuojančių neuroninių tinklų (SOM) sistemų lyginamoji analizė / The comparative analysis of the self-organizing map software. Stefanovič, Pavel 09 July 2010
In this master's thesis, biological and artificial neuron models are described. The focus is on one type of
artificial neural network: the self-organizing map (SOM). SOM training is described, together with the main
concepts needed to explain SOM networks (epoch, neighbourhood size, unified distance matrix (u-matrix), etc.).
Four self-organizing map systems (NeNet, SOM-Toolbox, DataBionic ESOM, Viscovery SOMine) and the Matlab
tools “nntool” and “nctool”, which are used to create and train SOM networks, have been analyzed. Usage
instructions for obtaining a simple SOM map are given for each system. A new system, “Somas”, featuring a new
visualization method, has been developed in Matlab; its distinguishing features and a usage guide are
presented. “Somas” implements a different training function from the other systems. The main goal of the
analyzed systems is to cluster data according to their similarity and to present the clusters on a SOM map.
The systems differ from one another in their data presentation, learning rules and visualization
capabilities, so their similarities and differences are discussed here. The resulting SOM maps and the
quantization and topographic errors obtained have been examined on three data sets: iris, glass and wine.
The quantization and topographic errors are quantitative measures of map quality. Conclusions are drawn
about the clusters formed in the data under study. An investigation has been carried out in the new system
“Somas” and the system “NeNet” in order to see how quantization and... [to full text]
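The SOM training and the quantization error discussed above can be sketched in a few lines. This is a minimal illustration of the general technique, not a reimplementation of “Somas” or any of the compared systems.

```python
# Minimal self-organizing map (SOM) training loop with the quantization
# error used as a map-quality measure. Illustrative sketch only; not a
# reimplementation of "Somas" or the other compared systems.

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 3))           # 200 samples, 3 features

rows, cols = 4, 4                     # a 4x4 map of neurons
weights = rng.random((rows * cols, 3))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)])

def train(weights, data, epochs=20, lr0=0.5, sigma0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # Best-matching unit (BMU): the closest neuron in input space
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood on the 2-D grid around the BMU
            d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantization_error(weights, data):
    """Mean distance from each sample to its best-matching unit."""
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return dists.min(axis=1).mean()

before = quantization_error(weights, data)
weights = train(weights, data)
after = quantization_error(weights, data)
print(after < before)  # training should reduce the quantization error
```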
|
7 |
Development of a Java Bytecode Front-End. Modesto, Francisco January 2009
The VizzAnalyzer is a powerful software analysis tool. It is able to extract information from various software representations, such as source code, but also from other specifications, such as UML. The extracted information is input to static analysis of these software projects. One programming language the VizzAnalyzer can extract information from is Java source code.
Analyzing the source code is sufficient for most analyses. But sometimes it is necessary to analyze compiled classes, either because the program is only available in bytecode, or because the scope of analysis includes libraries that usually exist in binary form. Thus, being able to extract information from Java bytecode is paramount for extending some analyses, e.g., studying the dependency structure of a project and the libraries it uses.
Currently, the VizzAnalyzer does not feature information extraction from Java bytecode. To allow, e.g., the analysis of the project dependency structure, we extend the VizzAnalyzer tool with a bytecode front-end that allows the extraction of information from Java bytecode.
This thesis describes the design and implementation of the bytecode front-end. After implementing and integrating the new front-end with the VizzAnalyzer, we are now able to perform new analyses that work on data extracted from both source code and bytecode.
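A bytecode front-end starts by parsing the binary structure of `.class` files. As a minimal illustration (not the VizzAnalyzer's actual front-end), the sketch below reads the header fields that every Java class file begins with: the magic number `0xCAFEBABE` followed by the minor and major version numbers.

```python
# Minimal sketch of reading information from Java bytecode: every
# .class file starts with the magic number 0xCAFEBABE followed by
# minor and major versions (big-endian u2 each). The header bytes
# below are constructed by hand for illustration, rather than read
# from a real .class file.

import struct

def read_class_header(data: bytes):
    # ">IHH": big-endian u4 magic, u2 minor_version, u2 major_version
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return {"minor": minor, "major": major}

# Hand-built header: magic, minor 0, major 52 (class-file format of Java 8)
header = struct.pack(">IHH", 0xCAFEBABE, 0, 52)
print(read_class_header(header))  # {'minor': 0, 'major': 52}
```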
|
8 |
Εργαλεία για την αξιολόγησης της ποιότητας λογισμικού / Tools for the evaluation of software quality. Κόρδας, Αθανάσιος 12 June 2015
This thesis deals with various commercial tools for software quality evaluation (both open-source and paid). Trial analyses of large commercial programs were carried out and the results were evaluated and compared. Finally, a static software analysis tool was developed as part of this work.
|
9 |
Tiekimo proceso kontrolės metodų analizė ir taikymas kuriant informacinę sistemą / Analysis of supply process control methods and implementation in an information system. Žideckas, Egidijus 27 May 2004
The overall process of globalization drives major changes in companies' business strategy. Supply chain management (SCM) is becoming one of the highest-priority activities in a company. To survive in intense competition, companies must improve their distribution strategies. The spread of Internet technology requires a flexible company to work directly with suppliers and to be able to react quickly and sensibly to demanding environmental changes. The classic objective of SCM is to have the right products in the right quantities, at the right place, at the right moment, at minimal cost. This requires close integration of effective control functions with SCM to assure the correctness and maturity of sent and received information, and compliance with internal company rules and supply strategies. This paper analyses SCM information systems, emphasising their principles, peculiarities and weaknesses. A new control method for SCM is introduced: „SCM using complex control methods“. It adds content-management principles to current SCM control methods. The paper describes the system concept, the system architecture and generic business process models. To prove the effectiveness of this method, software was designed, developed and implemented in one large trading company. Additionally, a quality study of the implemented software was performed, which showed the effectiveness of applying quality assurance in the software implementation process.
|
10 |
Heavyweight Pattern Mining in Attributed Flow Graphs. Simoes Gomes, Carolina (Unknown Date)
No description available.
|