1 |
Towards a holistic framework for software artefact consistency management. Pete, Ildiko, January 2017.
A software system is represented by different software artefacts ranging from requirements specifications to source code. As the system evolves, artefacts are often modified at different rates and times resulting in inconsistencies, which in turn can hinder effective communication between stakeholders, and the understanding and maintenance of systems. The problem of the differential evolution of heterogeneous software artefacts has not been sufficiently addressed to date as current solutions focus on specific sets of artefacts and aspects of consistency management and are not fully automated. This thesis presents the concept of holistic artefact consistency management and a proof-of-concept framework, ACM, which aim to support the consistent evolution of heterogeneous software artefacts while minimising the impact on user choices and practices and maximising automation. The ACM framework incorporates traceability, change impact analysis, change detection, consistency checking and change propagation mechanisms and is designed to be extensible. The thesis describes the design, implementation and evaluation of the framework, and an approach to automate trace link creation using machine learning techniques. The framework evaluation uses six open source systems and suggests that managing the consistency of heterogeneous artefacts may be feasible in practical scenarios.
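To make the trace-link creation step concrete, the sketch below shows one common way such links are recovered automatically: ranking requirement/source pairs by textual similarity. The requirement texts, file paths and threshold are invented for illustration; this is not ACM's actual algorithm, only a minimal stand-in for the machine-learning-based recovery the thesis describes.

```python
# Hedged sketch: rank requirement/source-file pairs by TF-IDF cosine similarity
# and keep the highest-scoring pairs as candidate trace links. All texts, paths
# and the threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = {
    "REQ-1": "The system shall encrypt the user password before storage.",
    "REQ-2": "The system shall export reports as PDF documents.",
}
source_files = {
    "auth/password_store.py": "encrypt and store the user password hash",
    "report/pdf_export.py": "export report contents to a PDF document",
}

corpus = list(requirements.values()) + list(source_files.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
scores = cosine_similarity(tfidf[: len(requirements)], tfidf[len(requirements):])

THRESHOLD = 0.2  # illustrative cut-off; a trained classifier could replace it
for i, req_id in enumerate(requirements):
    for j, path in enumerate(source_files):
        if scores[i, j] >= THRESHOLD:
            print(f"candidate link: {req_id} -> {path} (score {scores[i, j]:.2f})")
```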
|
2 |
Change Impact Analysis in Simulink Designs of Embedded Systems. Mackenzie, Bennett, January 2019.
This thesis presents the Boundary Diagram Tool, a tool for change impact analysis of large Simulink designs of embedded systems. The Boundary Diagram Tool extends the Reach/Coreach Tool, an existing tool for model slicing within a single Simulink model, to trace the impact of model changes through multiple Simulink models and to the network interfaces of an automotive controller. While the change impact analysis results can be viewed directly within the Simulink models, the tool also uses various block diagrams to represent the impact analysis results at different levels of abstraction, motivated by industrial needs. To present the complex impact analysis results effectively, several techniques for the visual representation of large graphs are employed. Furthermore, the Reach/Coreach Tool, the underlying model slicing engine, was significantly improved. The Boundary Diagram Tool is currently being integrated into the software development process of a large automotive OEM (Original Equipment Manufacturer). It provides support during several phases of the change management process: change request analysis and evaluation, as well as the implementation, verification and integration of software changes. The tool also aids the impact analyses required for compliance with functional safety standards such as ISO 26262.
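The slicing idea underlying the Reach/Coreach Tool can be illustrated on a plain dependency graph: "reach" follows signal flow forward from a changed block, "coreach" follows it backward. The sketch below is a hedged abstraction with invented block names; the real tool operates on Simulink models, not Python dictionaries.

```python
# Illustrative sketch of reach/coreach-style slicing on a block dependency
# graph; block names and edges are made up, not taken from the tool.
from collections import defaultdict, deque

edges = [  # (source block, destination block) signal flow
    ("SensorIn", "Filter"), ("Filter", "Controller"),
    ("Controller", "ActuatorOut"), ("Controller", "CAN_Tx"),
    ("Calibration", "Controller"),
]

fwd, bwd = defaultdict(set), defaultdict(set)
for src, dst in edges:
    fwd[src].add(dst)
    bwd[dst].add(src)

def closure(start, graph):
    """Blocks reachable from `start` by following `graph` edges."""
    seen, todo = set(), deque([start])
    while todo:
        node = todo.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

changed = "Filter"
print("reach (downstream impact):", closure(changed, fwd))
print("coreach (upstream contributors):", closure(changed, bwd))
```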
|
3 |
An empirical study on object-oriented software dependencies: logical, structural and semantic. Ajienka, Nemitari Miebaka, January 2018.
Three of the most widely studied software dependency types are structural, logical and semantic dependencies. Logical dependencies capture the degree of co-change between software artifacts. Semantic dependencies capture the degree to which artifacts, comments and names are related. Structural dependencies capture the dependencies in the source code of artifacts. Prior studies show that combining dependency analyses (e.g., semantic and logical analysis) improves accuracy when predicting which artifacts are likely to be impacted by the ripple effects of software changes, though not to a large extent, compared to individual approaches. In addition, some dependencies can be hidden dependencies: an analysis of one dependency type (e.g., logical) does not reveal artifacts linked only by another dependency type (e.g., semantic). While previous studies have focused on combining dependency information with minimal benefits, this thesis explores the consistency of these measurements and whether hidden dependencies arise between artifacts on any of the axes studied. In this thesis, 79 Java projects are empirically studied to investigate (i) the direct influence and the degree of overlap between dependency types on three axes (logical-structural (LSt); logical-semantic (LSe); structural-semantic (StSe)), and (ii) the presence of hidden coupling on these axes. The results show that a high proportion of hidden dependencies can be detected on the LSt and StSe axes. Notwithstanding, the LSe axis shows a much smaller proportion of hidden dependencies. Practicable refactoring methods to mitigate hidden dependencies are proposed in the thesis and discussed with examples.
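A minimal sketch of how a hidden dependency on the LSt axis could be surfaced: compute logical coupling from co-change history and report pairs with no structural link. The commit data, threshold and class names are illustrative only, not the thesis's actual mining procedure.

```python
# Hypothetical sketch: flag "hidden" dependencies, i.e. class pairs that
# co-change often (logical coupling) but have no structural link. Data invented.
from itertools import combinations
from collections import Counter

commits = [  # each commit = set of classes changed together
    {"Order", "Invoice"}, {"Order", "Invoice"}, {"Order", "Cart"},
    {"Invoice", "TaxRules"}, {"Order", "Invoice", "TaxRules"},
]
structural = {("Invoice", "Order"), ("Cart", "Order")}  # e.g. call/field references

co_change = Counter()
for commit in commits:
    for pair in combinations(sorted(commit), 2):
        co_change[pair] += 1

MIN_SUPPORT = 2  # illustrative threshold for calling a pair logically coupled
for pair, count in co_change.items():
    if count >= MIN_SUPPORT and pair not in structural:
        print(f"hidden dependency candidate: {pair} co-changed {count} times")
```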
|
4 |
Package Dependencies Analysis and Remediation in Object-Oriented Systems. Laval, Jannik, 17 June 2011.
Software evolves over time through the modification, addition and removal of classes, methods, functions and dependencies. One consequence is that behaviour can end up in the wrong packages and break the modularity of the software. A good organisation of classes into identifiable packages eases the comprehension, maintenance, testing and evolution of software. We argue that maintainers lack tools to support software remodularisation. Software maintenance requires approaches that help with (i) understanding the package-level structure and assessing its quality; (ii) identifying modularity problems; and (iii) making decisions about changes. In this thesis we propose ECOO, an approach that supports remodularisation. It addresses the following three research areas: (i) understanding dependency problems between packages, for which we propose visualisations that highlight cyclic dependencies at the package level; (ii) proposing dependencies that should be changed, where the approach suggests dependency changes that make the system more modular; and (iii) analysing the impact of changes, where the approach provides a change impact analysis so that modifications can be tried out before being applied to the real system. The approach presented in this thesis has been validated qualitatively and the results were taken into account in the re-engineering of the analysed systems. The results obtained demonstrate the usefulness of our approach.
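As a rough illustration of the first research area, the sketch below detects cyclic package dependencies as strongly connected components of the dependency graph. Package names and edges are invented, and networkx is only assumed as a convenient graph library; ECOO's actual visualisations and suggestions are not reproduced here.

```python
# Illustrative sketch: detect cyclic dependencies between packages via a
# strongly-connected-components pass; package names are invented.
import networkx as nx  # assumed available; a hand-rolled Tarjan works too

deps = [("ui", "core"), ("core", "persistence"),
        ("persistence", "core"),          # cycle: core <-> persistence
        ("core", "util"), ("ui", "util")]

graph = nx.DiGraph(deps)
cycles = [scc for scc in nx.strongly_connected_components(graph) if len(scc) > 1]
print("cyclic package groups:", cycles)
```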
|
5 |
Ratchet: a prototype change-impact analysis tool with dynamic test selection for C++ code. Asenjo, Alejandro, 17 June 2011.
Understanding the impact of changes made daily by development teams working on large-scale software products is a challenge faced by many organizations. Development efficiency can be severely affected by the increase in fragility that creeps in as products evolve and become more complex. Processes such as gated check-in mechanisms can be put in place to detect problematic changes before submission, but are usually limited in effectiveness due to their reliance on statically defined sets of tests. Traditional change-impact analysis techniques can be combined with information gathered at run time to create a system that selects tests for change verification. This report provides the high-level architecture of a system, named Ratchet, that combines static analysis of C++ programs, enabled by reuse of the Clang compiler frontend, with code-coverage information gathered from automated test runs, in order to automatically select and schedule tests that exercise functions and methods possibly affected by a change. Prototype implementations of the static-analysis components of the system are provided, along with a basic evaluation of their capabilities through synthetic examples.
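A hedged sketch of the core selection idea follows: walk the call graph backwards from the changed functions and pick the tests whose recorded coverage intersects the impacted set. Function names, coverage data and the call graph are invented; Ratchet's actual Clang-based analysis and scheduling are not shown.

```python
# Rough sketch of coverage-based test selection: run only the tests whose
# recorded coverage touches a function reachable (as a caller) from the change.
from collections import defaultdict, deque

call_graph = {  # caller -> callees, e.g. recovered by static analysis
    "ParseConfig": {"ReadFile"},
    "ApplySettings": {"ParseConfig"},
    "StartServer": {"ApplySettings"},
}
coverage = {  # test -> functions it exercised in a previous run
    "test_parse_defaults": {"ParseConfig", "ReadFile"},
    "test_server_boot": {"StartServer", "ApplySettings", "ParseConfig"},
    "test_logging": {"InitLog"},
}

# Invert the call graph so we can walk from a changed callee to its callers.
callers = defaultdict(set)
for caller, callees in call_graph.items():
    for callee in callees:
        callers[callee].add(caller)

def affected(changed):
    seen, todo = set(changed), deque(changed)
    while todo:
        fn = todo.popleft()
        for caller in callers[fn]:
            if caller not in seen:
                seen.add(caller)
                todo.append(caller)
    return seen

impacted = affected({"ReadFile"})
selected = [t for t, fns in coverage.items() if fns & impacted]
print("tests to run:", selected)  # parse and server tests, not the logging test
```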
|
6 |
Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming. Wilkerson, Jerod W., January 2008.
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS) based on CHA is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce a cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful for other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method that incorporates mutation analysis is proposed, along with a set of future research directions for developing tools to support mutation-analysis-enhanced TDD and for continuing to improve the TDD method.
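For readers unfamiliar with mutation analysis, the toy sketch below shows the basic loop: inject a small syntactic change and check whether the existing tests kill the mutant. The function, mutations and tests are invented and deliberately tiny; they are not taken from the dissertation.

```python
# Toy sketch of mutation analysis: apply small syntactic mutations to a function
# and check whether the (tiny) test suite "kills" each mutant.
ORIGINAL = "def is_adult(age):\n    return age >= 18\n"
MUTATIONS = [(">=", ">"), ("18", "17"), ("18", "18.0")]

def tests_pass(source):
    """Run the test suite against a candidate implementation given as source."""
    namespace = {}
    exec(source, namespace)
    is_adult = namespace["is_adult"]
    return is_adult(18) is True and is_adult(17) is False

assert tests_pass(ORIGINAL)  # the suite passes on the unmutated code

for old, new in MUTATIONS:
    mutant = ORIGINAL.replace(old, new, 1)
    status = "killed" if not tests_pass(mutant) else "survived"
    print(f"mutation {old!r} -> {new!r}: {status}")
# A surviving mutant points at a weak test suite or an equivalent mutant.
```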
|
7 |
Using Machine Learning and Graph Mining Approaches to Improve Software Requirements Quality: An Empirical Investigation. Singh, Maninder, January 2019.
Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating the propagation of faults to later phases where the same faults are harder to find and fix. The output of an inspection process is a list of faults present in the software requirements specification (SRS) document. The artifact author must manually read through the reviews and differentiate between true faults and false positives before fixing the faults. The first goal of this research is to automate the detection of useful vs. non-useful reviews. Next, post-inspection, the requirements author has to manually extract key problematic topics from useful reviews that can be mapped to individual requirements in an SRS to identify fault-prone requirements. The second goal of this research is to automate this mapping by employing keyphrase extraction (KPE) algorithms and semantic analysis (SA) approaches to identify fault-prone requirements. During fault fixation, the author has to manually verify the requirements that could have been impacted by a fix. The third goal of this research is to assist authors post-inspection with change impact analysis (CIA) during fault fixation, using natural language processing with semantic analysis and mining solutions from graph theory. Selecting skilled inspectors is also pertinent for carrying out post-inspection tasks accurately. The fourth goal of this research is to identify skilled inspectors using various classification and feature selection approaches. The dissertation has led to the development of an automated solution that can identify useful reviews, help identify skilled inspectors, extract the most prominent topics/keyphrases from fault logs, and help the requirements author during post-inspection fault fixation.
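A minimal, assumption-laden sketch of the first goal (separating useful from non-useful inspection reviews) is shown below using a bag-of-words classifier; the training reviews and labels are invented, and the dissertation's actual feature engineering is not reproduced.

```python
# Illustrative sketch: classify inspection reviews as useful (1) or not (0)
# with a simple bag-of-words model. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    ("Requirement 4 contradicts requirement 9 on the timeout value", 1),
    ("The login requirement never states the maximum retry count", 1),
    ("Looks good to me", 0),
    ("Minor typo, otherwise fine", 0),
]
texts, labels = zip(*reviews)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

new_review = "Requirement 12 is ambiguous about which sensor triggers the alarm"
print("useful" if model.predict([new_review])[0] else "not useful")
```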
|
8 |
Filtering equivalent changes from dependency updates with CBMC. Mårtensson, Jonas, January 2022.
Background. Open source dependencies have become ubiquitous in software development, and the risk of regressions during an update is a key concern facing developers. Change impact analysis (CIA) can be used to assess the effects of a dependency update and aid in addressing this challenge. The manual effort required for CIA has created a need to reduce the amount of data that is considered during a compatibility assessment. Formal (mathematical) methods for equivalence analysis have featured prominently in previous attempts at minimizing the amount of data that needs to be analyzed. The C Bounded Model Checker (CBMC) is an established tool that can perform equivalence verification, and a gap in knowledge exists regarding its usefulness for assessing update compatibility. Objectives. The objective of the study was to evaluate how well CBMC could filter out equivalent changes from impact assessments and the relevance of this for dependency updates. A tool named Equivalent Update Filter (EUF) was developed in the study to tackle this problem. The effectiveness of the tool was assessed based on (1) the size of the reductions made possible through filtering, (2) the relevance of the auto-generated verification resources created to perform analysis, and (3) the correctness of the results of the equivalence analysis. Methods. To assess the reduction capabilities of EUF, a controlled experiment was conducted on the effect of CBMC-based equivalence analysis on impact assessment sizes. Updates for the experiment were derived from random commit pairs among three C dependencies with established industry use. The relevance of EUF's auto-generated verification resources was measured through an ordinal scale that highlighted the prevalence of properties in a dependency that would prevent sound equivalence analysis. The soundness of the reductions suggested by EUF was investigated through a comparison with a manually labeled set of updates. Results. The developed filtering approach was able to decrease impact assessment sizes by 1 % on average. Considerable differences in analysis time were observed between the dependencies in the study. For each update, 11 % of the auto-generated verification resources were found on average to be useful for equivalence analysis. EUF's classification of equivalent changes was measured to have an accuracy of 67 % in relation to the ground truth of manually labeled updates. Conclusions. The study showed that EUF, and by extension CBMC-based equivalence analysis, has potential to be useful in dependency compatibility assessments. Follow-up studies on different verification engines and with improved methodologies would be necessary to motivate practical use.
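The filtering idea can be sketched as follows: for each changed function, generate a small C harness asserting that the old and new versions agree on all inputs, and let CBMC discharge or refute the assertion. The harness layout, the function names and the exit-code convention below are assumptions for illustration, not EUF's implementation.

```python
# Hedged sketch: emit a CBMC equivalence harness for a pre/post-update function
# pair and treat a successful verification as "equivalent change".
import pathlib
import subprocess
import tempfile

HARNESS = """
#include <assert.h>
int add_old(int a, int b) { return a + b; }          /* pre-update version  */
int add_new(int a, int b) { return b + a; }          /* post-update version */

int main(void) {
  int a, b;                     /* uninitialized: nondeterministic for CBMC */
  assert(add_old(a, b) == add_new(a, b));
  return 0;
}
"""

def is_equivalent(harness_src: str) -> bool:
    path = pathlib.Path(tempfile.mkdtemp()) / "harness.c"
    path.write_text(harness_src)
    result = subprocess.run(["cbmc", str(path)], capture_output=True, text=True)
    return result.returncode == 0  # assumed: zero exit means verification succeeded

if __name__ == "__main__":
    print("equivalent change" if is_equivalent(HARNESS) else "needs review")
```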
|
9 |
Efficient Symbolic Execution of Concurrent Software. Guo, Shengjian, 26 April 2019.
Concurrent software is widely used in computer systems because of its highly efficient computation. However, testing and verifying concurrent software remain challenging tasks. This is not only because non-deterministic thread interferences are hard to reason about, but also because of the large state space caused by the simultaneous path and interleaving explosions. That is, the number of program paths in each thread may be exponential in the number of branch conditions, and the number of thread interleavings may be exponential in the number of concurrent operations. This dissertation presents a set of new methods, built upon symbolic execution, a program analysis technique that systematically explores program state space, for testing concurrent programs. By modeling both functional and non-functional properties of the programs as assertions, these new methods efficiently analyze the viable behaviors of the given concurrent programs. The first method is assertion-guided symbolic execution, a state-space reduction technique that identifies and eliminates executions that are redundant with respect to the already-explored interleavings. The second method is incremental symbolic execution, which generates test inputs only for the program behaviors influenced by small code changes between two program versions. The third method is SYMPLC, a technique with domain-specific reduction strategies for generating tests for multitasking Programmable Logic Controller (PLC) programs written in languages specified by the IEC 61131-3 standard. The last method is adversarial symbolic execution, a technique for detecting concurrency-related side-channel information leaks by analyzing the cache timing behaviors of a concurrent program in symbolic execution. This dissertation evaluates the proposed methods on a diverse set of both synthesized programs and real-world applications. The experimental results show that these techniques can significantly outperform state-of-the-art symbolic execution tools for concurrent software.

Software testing is a technique that runs software as a black box on computer hardware multiple times, with different inputs per run, to check whether the software behavior conforms to the functionality designed by the developers. Programmers increasingly develop multithreaded and multitasking software, e.g., web browsers and web servers, to utilize highly efficient multiprocessor hardware. This approach significantly improves software performance, since a large computing job can be decomposed into a set of small jobs which are then distributed to concurrently running threads (tasks). However, testing multithreaded (multitask) software is extremely challenging. The most critical problem is the inherent non-determinism: executing sequential software with the same input data always produces the same output, but running multithreaded (multitask) software multiple times, even with the same input data, may yield different output in each run. The root reason is that concurrent threads (tasks) may interleave their progress at any time; thus the internal execution order may be altered unexpectedly, causing runtime errors. Meanwhile, finding such faults is difficult, since the number of possible interleavings can grow exponentially in the number of concurrent thread (task) operations. This dissertation proposes four methods to test multithreaded/multitask software efficiently. The first method summarizes the already-tested program behaviors to avoid future testing runs that cannot lead to new faults. The second method only tests program behaviors that are impacted by program changes. The third method tests multitask Programmable Logic Controller (PLC) programs by excluding testing runs that are infeasible with respect to the PLC semantics. The last method tests non-functional program properties by systematic concurrency analysis. This dissertation evaluates these methods on a diverse set of benchmarks. The experimental results show that the proposed methods significantly outperform state-of-the-art techniques for concurrent software analysis.
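A small worked example of the interleaving explosion mentioned above: n threads executing k operations each admit (n*k)!/(k!)^n distinct interleavings. The sketch below simply evaluates that formula for a few invented thread/operation counts.

```python
# Back-of-the-envelope illustration of the interleaving explosion:
# n threads with k sequential operations each -> (n*k)! / (k!)^n interleavings.
from math import factorial

def interleavings(n_threads: int, ops_per_thread: int) -> int:
    total = n_threads * ops_per_thread
    return factorial(total) // factorial(ops_per_thread) ** n_threads

for n, k in [(2, 2), (2, 5), (3, 5), (4, 5)]:
    print(f"{n} threads x {k} ops -> {interleavings(n, k):,} interleavings")
```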
|
10 |
Approche probabiliste pour l'analyse de l'impact des changements dans les programmes orientés objet (A probabilistic approach to change impact analysis in object-oriented programs). Zoghlami, Aymen, 06 1900.
We propose a probabilistic approach to determine the impact of changes in object-oriented programs. The approach predicts, for a given change in a class of the system, the set of other classes potentially affected by that change. The prediction is given as a probability that depends, on the one hand, on the interactions between classes expressed as numbers of invocations and, on the other hand, on relationships extracted from the source code. These relationships are extracted automatically by reverse engineering. To implement our approach, we rely on Bayesian networks. After a learning phase, these networks predict the set of classes affected by a change. The proposed probabilistic approach is evaluated in two distinct scenarios involving several types of changes performed on different systems. For systems with historical data, learning was performed on earlier versions. For systems that lack sufficient data about the changes in their previous versions, learning was performed with data extracted from other systems.

We study the possibility of predicting the impact of changes in object-oriented code using Bayesian networks. For each change type, we produce a Bayesian network that determines the probability that a class is impacted given that another class is changed. Each network takes as input a set of possible relationships between classes. We train our networks using historical data. The proposed impact-prediction approach is evaluated with two different scenarios, various types of changes, and five systems. In the first scenario, we use as training data the changes performed in previous versions of the same system. In the second scenario, training data is borrowed from systems that are different from the changed one. Our evaluation showed that, in both cases, we obtain very good predictions, even though they are better in the first scenario.
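The quantity these networks estimate can be illustrated with a much simpler frequency estimate of P(B changes | A changes) derived from past co-changes. The change history below is invented, and this single-probability estimate is only a hedged stand-in for the thesis's Bayesian networks, which also condition on invocation counts and source-code relationships.

```python
# Minimal illustration of the prediction target: estimate, from a change
# history, the probability that class B changes when class A changes.
from collections import Counter
from itertools import permutations

history = [  # each entry = classes modified together in one past change
    {"Parser", "Lexer"}, {"Parser", "Lexer", "Ast"}, {"Parser", "Ast"},
    {"Lexer"}, {"Parser"}, {"Ast", "Printer"},
]

changed = Counter()
co_changed = Counter()
for change_set in history:
    for cls in change_set:
        changed[cls] += 1
    for a, b in permutations(change_set, 2):
        co_changed[(a, b)] += 1

def p_impacted(b: str, given_a: str) -> float:
    """P(b changes | given_a changes), estimated from the change history."""
    return co_changed[(given_a, b)] / changed[given_a] if changed[given_a] else 0.0

print(f"P(Lexer | Parser changed) = {p_impacted('Lexer', 'Parser'):.2f}")
print(f"P(Ast   | Parser changed) = {p_impacted('Ast', 'Parser'):.2f}")
```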
|