41 |
A Requirements-Based Partition Testing Framework Using Particle Swarm Optimization Technique. Ganjali, Afshar (January 2008)
Modern society is increasingly dependent on the quality of software systems. Software failures can have severe consequences, including loss of human life. Various fault prevention and detection techniques can be deployed at different stages of software development. Testing is the most widely used approach for ensuring software quality.
Requirements-Based Testing and Partition Testing are two widely used approaches for testing software systems. Although both techniques are mature, widely addressed in the literature, and generally accepted as key techniques of functional testing, their combination lacks a systematic approach. In this thesis, we propose a framework, along with a procedural process, for testing a system using Requirements-Based Partition Testing (RBPT). The framework helps testers start from the requirements documents and follow a straightforward, step-by-step process to generate the required test cases without losing any required data. Although many steps of the process are manual, the framework can serve as a foundation for automating the whole test case generation process.
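To make the partition-testing idea concrete, below is a minimal sketch of requirements-driven equivalence partitioning; it illustrates the general technique, not the thesis's own RBPT framework, and all requirement names and boundaries are invented.

```python
# Illustrative only: each (hypothetical) requirement clause induces an
# equivalence partition of the input domain; one representative input is
# selected per partition, ideally at a boundary.
partitions = {
    "REQ-1: reject negative amounts": lambda x: x < 0,
    "REQ-2: accept small transfers":  lambda x: 0 <= x <= 1_000,
    "REQ-3: flag large transfers":    lambda x: x > 1_000,
}

representatives = {
    "REQ-1: reject negative amounts": -1,      # just inside the partition
    "REQ-2: accept small transfers":  1_000,   # upper boundary
    "REQ-3: flag large transfers":    1_001,   # just past the boundary
}

def derive_test_cases():
    """Yield one (requirement, input) pair per partition."""
    for req, predicate in partitions.items():
        value = representatives[req]
        assert predicate(value), f"representative must lie in its partition: {req}"
        yield req, value

for req, value in derive_test_cases():
    print(f"{req} -> test input {value}")
```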
Another issue in testing a software product is the test case selection problem. Choosing appropriate test cases is an essential part of software testing that can lead to significant improvements in efficiency as well as reduced costs of combinatorial testing. Unfortunately, the problem of finding minimum-size test sets is NP-complete in general, so artificial-intelligence-based search algorithms have been widely used to generate near-optimal solutions. In this thesis, we also propose a novel technique for test case generation using Particle Swarm Optimization (PSO), an effective optimization technique that has emerged over the last decade. Empirical studies show that in some domains PSO performs as well as, or better than, other techniques. At the same time, a particle swarm algorithm is much simpler, easier to implement, and has only a few parameters for the user to adjust. These properties make PSO an ideal candidate for test case generation. To compare our newly proposed algorithm fairly against existing techniques, we have designed and implemented a framework for the automatic evaluation of these methods. Through experiments using our evaluation framework, we illustrate how this new test case generation technique can outperform other existing methodologies.
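For readers unfamiliar with the technique, the sketch below shows how compact a basic particle swarm optimizer is; the inertia and attraction parameters, bounds, and toy fitness function are generic placeholders, not the thesis's actual encoding of test cases.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    """Minimise `fitness` over [lo, hi]^dim with a plain PSO loop.

    w is the inertia weight; c1 and c2 weight the pull toward each
    particle's personal best and the swarm's global best, respectively.
    """
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy fitness standing in for "how poorly a candidate test set covers the
# required combinations" -- lower is better.
best, best_val = pso(lambda p: sum((x - 0.5) ** 2 for x in p), dim=5)
print(round(best_val, 6))
```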
|
42 |
Adaptive Monitoring of Complex Software Systems using Management Metrics. Munawar, Mohammad Ahmad (30 September 2009)
Software systems supporting networked, transaction-oriented services are large and complex;
they comprise a multitude of inter-dependent layers and components,
and they implement many dynamic optimization mechanisms.
In addition, these systems are subject to workloads that are hard to predict.
These factors make monitoring these systems as well as performing problem determination
challenging and costly.
In this thesis we tackle these challenges with the goal of lowering the cost and
improving the effectiveness of monitoring and problem determination
by reducing the dependence on human operators.
Specifically, this thesis presents and demonstrates the effectiveness of an efficient,
automated monitoring approach which enables detection of errors and failures,
and which assists in localizing faults.
Software systems expose various types of monitoring data;
this thesis focuses on the use of management metrics to monitor a system's health.
We devise a system modeling approach which entails modeling stable,
statistical correlations among management metrics; these correlations
characterize a system's normal behaviour.
This approach allows a system model to be built automatically and efficiently
using the monitoring data alone.
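As an illustrative sketch of this kind of modeling (assuming simple pairwise linear correlations, which may differ from the exact models used in the thesis), one can fit a least-squares line between two metrics from healthy-run data and flag runtime samples whose residual drifts well outside the learned spread:

```python
import statistics

def fit_pair(xs, ys):
    """Least-squares fit y ~ a*x + b, plus the residual spread on training data."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    resid_sd = statistics.pstdev([y - (a * x + b) for x, y in zip(xs, ys)])
    return a, b, resid_sd

def is_anomalous(a, b, resid_sd, x, y, k=3.0):
    """Flag a runtime sample whose residual exceeds k standard deviations."""
    return abs(y - (a * x + b)) > k * resid_sd

# Hypothetical healthy-run samples: request rate vs. CPU utilisation (%).
reqs = [100, 120, 140, 160, 180, 200]
cpu  = [21,  25,  28,  33,  36,  41]
a, b, sd = fit_pair(reqs, cpu)
print(is_anomalous(a, b, sd, 150, 75))  # CPU far above the learned trend -> True
```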
In order to control the monitoring overhead, and yet allow a system's health
to be assessed reliably, we design an adaptive monitoring approach.
This adaptive capability builds on the flexible nature of our system modeling approach,
which allows the set of monitored metrics to be altered at runtime.
We develop methods to automatically select management metrics to collect
at the minimal monitoring level, without any domain knowledge.
In addition, we devise an automated fault localization approach,
which leverages the ability of the monitoring system to analyze individual metrics.
Using a realistic, multi-tier software system, including different applications based on
Java Enterprise Edition and industrial-strength products, we evaluate our system modeling approach.
We show that stable metric correlations exist in complex software systems and
that many of these correlations can be modeled using simple, efficient
techniques.
We investigate the effect of collecting management metrics on system performance.
We show that the monitoring overhead can be high and thus needs to be controlled.
We employ fault injection experiments to evaluate the effectiveness of our
adaptive monitoring and fault localization approach.
We demonstrate that our approach is cost-effective,
has high fault coverage and, in the majority of the cases studied,
provides pertinent diagnosis information.
The main contribution of this work is to show how to monitor complex software systems
and determine problems in them automatically and efficiently.
Our solution approach has wide applicability and the techniques we use are simple
and yet effective.
Our work suggests that the cost of monitoring software systems is not necessarily
a function of their complexity, providing hope that the health of increasingly large and
complex systems can be tracked with a limited amount of human resources and without
sacrificing much system performance.
|
43 |
A Study of Mobility Models in Mobile Surveillance Systems. Miao, Yun-Qian (January 2010)
This thesis explores the role of a mobile sensor's mobility model and how it affects surveillance system performance in terms of area coverage and detection effectiveness. Several algorithms, categorized into three types (fully coordinated mobility, fully random mobility, and emergent mobility models), are discussed along with their advantages and limitations.
A multi-agent platform to organize mobile sensor nodes, control nodes, and actor nodes was implemented. It demonstrated great flexibility and was notable for its distributed, autonomous, and cooperative problem-solving characteristics.
Realistic scenarios based on three KheperaIII mobile robots and a model mimicking the Waterloo regional airport were used to exercise the implementation platform and evaluate the performance of the different mobility algorithms. Several practical issues related to software configurations and the interface library were addressed as by-products.
The experimental results from both simulation and the real platform show that area coverage and detection effectiveness vary with the mobility model applied. The fully coordinated model's superior efficiency comes at the cost of careful task planning and high demands on sensor navigational accuracy. The fully random model is the least efficient in area coverage and detection because each sensor repeatedly re-searches areas, both on its own and in overlap with the other sensors.
A self-organizing algorithm named anti-flocking, which mimics the social behaviour of solitary animals, is proposed here for the first time. It relies on quite simple rules to achieve purposeful, coordinated group action without explicit global control. Experimental results demonstrate its attractive target detection efficiency in terms of both detection rate and detection time, while providing desirable features such as scalability, robustness, and adaptivity.
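The sketch below conveys the general flavour of such rules; the specific rule set, the avoidance radius, and the staleness-driven "selfishness" heuristic are assumptions for illustration, not the thesis's exact algorithm.

```python
import math

def anti_flocking_step(me, peers, stale_cells, avoid_radius=5.0):
    """One movement decision for a mobile sensor; illustrative rules only.

    1. Collision avoidance / de-centering: steer away from the centroid of
       nearby peers, so sensors spread out (the opposite of flocking cohesion).
    2. Selfishness: with no one nearby, head for the least recently
       visited (most stale) cell of the surveillance area.
    """
    near = [p for p in peers if math.dist(me, p) < avoid_radius]
    if near:
        cx = sum(p[0] for p in near) / len(near)
        cy = sum(p[1] for p in near) / len(near)
        dx, dy = me[0] - cx, me[1] - cy       # unit vector away from the crowd
        norm = math.hypot(dx, dy) or 1.0
        return dx / norm, dy / norm
    target = max(stale_cells, key=lambda c: c[2])  # cells are (x, y, staleness)
    dx, dy = target[0] - me[0], target[1] - me[1]
    norm = math.hypot(dx, dy) or 1.0
    return dx / norm, dy / norm

step = anti_flocking_step(me=(0.0, 0.0),
                          peers=[(2.0, 1.0), (30.0, 40.0)],
                          stale_cells=[(10.0, 10.0, 5), (-8.0, 3.0, 12)])
print(step)  # heads away from the nearby peer at (2.0, 1.0)
```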
In the simulation results, the detection rate of the anti-flocking model increases by 36.5% and the average detection time decreases by 46.2% compared with the fully random motion model. The real-platform results reflect the same performance improvement.
|
44 |
Cost-Sensitive Boosting for Classification of Imbalanced Data. Sun, Yanmin (11 May 2007)
The classification of data with imbalanced class distributions poses a significant obstacle to the performance attainable by most well-developed classification systems, which assume relatively balanced class distributions. This problem is especially crucial in application domains such as medical diagnosis, fraud detection, and network intrusion, which are of great importance in machine learning and data mining.
This thesis explores meta-techniques which are applicable to most
classifier learning algorithms, with the aim to advance the
classification of imbalanced data. Boosting is a powerful
meta-technique to learn an ensemble of weak models with a promise
of improving classification accuracy. AdaBoost is widely regarded as the most successful boosting algorithm. This thesis starts by
applying AdaBoost to an associative classifier for both learning
time reduction and accuracy improvement. However, the promise of improved accuracy means little in the context of the class imbalance problem, where overall accuracy is a poor measure of performance. The insight
gained from a comprehensive analysis on the boosting strategy of
AdaBoost leads to the investigation of cost-sensitive boosting
algorithms, which are developed by introducing cost items into the
learning framework of AdaBoost. The cost items are used to denote
the uneven identification importance among classes, such that the
boosting strategies can intentionally bias the learning towards
classes associated with higher identification importance and
eventually improve the identification performance on them. In a given application domain, the cost values associated with different types of samples are usually not readily available for applying the proposed cost-sensitive boosting algorithms. To set effective cost values, empirical methods are used for two-class applications, and the heuristic search of a Genetic Algorithm is employed for multi-class applications.
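The sketch below shows one common shape such a cost-aware update can take, in the spirit of the AdaCost/CSB family of variants; the thesis's precise update rules may differ, and the cost values here are toy numbers.

```python
import math

def cost_sensitive_reweight(weights, costs, y_true, y_pred, alpha):
    """One boosting round's weight update with per-sample cost items.

    Misclassified high-cost samples grow fastest, so later weak learners
    focus on the classes deemed more important to identify.
    """
    new_w = [w * math.exp(alpha * c * (1 if yt != yp else -1))
             for w, c, yt, yp in zip(weights, costs, y_true, y_pred)]
    total = sum(new_w)            # renormalise to a distribution
    return [w / total for w in new_w]

# Toy round: both samples are misclassified, but sample 0 belongs to the
# minority class (cost 3.0), so its weight grows roughly e^0.8 times more.
w = cost_sensitive_reweight(weights=[0.5, 0.5], costs=[3.0, 1.0],
                            y_true=[1, 0], y_pred=[0, 1], alpha=0.4)
print([round(x, 3) for x in w])   # ~[0.69, 0.31]
```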
This thesis also covers the implementation of the proposed
cost-sensitive boosting algorithms. It ends with a discussion of the experimental results of classifying real-world imbalanced data. Compared with existing algorithms, the new algorithms presented in this thesis achieve superior measurements with respect to the learning objectives.
|
45 |
A Bayesian Framework for Software Regression Testing. Mir arabbaygi, Siavash (January 2008)
Software maintenance reportedly accounts for much of the total cost associated
with developing software. These costs occur because modifying software is a highly
error-prone task. Changing software to correct faults or add new functionality
can cause existing functionality to regress, introducing new faults. To avoid such
defects, one can re-test software after modifications, a task commonly known as
regression testing.
Regression testing typically involves the re-execution of test cases developed for
previous versions. Re-running all existing test cases, however, is often costly and
sometimes even infeasible due to time and resource constraints. Re-running test
cases that do not exercise changed or change-impacted parts of the program carries
extra cost and gives no benefit. The research community has thus sought ways to
optimize regression testing by lowering the cost of test re-execution while preserving
its effectiveness. To this end, researchers have proposed selecting a subset of test
cases according to a variety of criteria (test case selection) and reordering test cases
for execution to maximize a score function (test case prioritization).
This dissertation presents a novel framework for optimizing regression testing
activities, based on a probabilistic view of regression testing. The proposed framework
is built around predicting the probability that each test case finds faults in the
regression testing phase, and optimizing the test suites accordingly. To predict such
probabilities, we model regression testing using a Bayesian Network (BN), a powerful
probabilistic tool for modeling uncertainty in systems. We build this model using
information measured directly from the software system. Our proposed framework
builds upon the existing research in this area in many ways. First, our framework
incorporates different information extracted from software into one model, which
helps reduce uncertainty by using more of the available information, and enables
better modeling of the system. Moreover, our framework provides flexibility by
enabling a choice of which sources of information to use. Research in software
measurement has proven that dealing with different systems requires different techniques
and hence requires such flexibility. Using the proposed framework, engineers
can customize their regression testing techniques to fit the characteristics of their
systems using measurements most appropriate to their environment.
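As a drastically simplified stand-in for the Bayesian Network (a single layer of conditional probabilities rather than a full BN), the sketch below scores each test case by the probability that it reveals at least one fault among the code elements it covers, then orders the suite by that score; every probability and name here is an illustrative assumption.

```python
def fault_probability(changed, fault_proneness,
                      p_if_changed=0.3, p_if_unchanged=0.02):
    """P(element is faulty): a two-state conditional in miniature."""
    base = p_if_changed if changed else p_if_unchanged
    return min(1.0, base * (1.0 + fault_proneness))

def prioritize(tests, elements):
    """Order tests by P(test reveals at least one fault).

    `tests` maps a test name to the set of element ids it covers;
    `elements` maps element ids to (changed?, fault-proneness score).
    """
    def score(covered):
        p_miss = 1.0
        for e in covered:
            changed, proneness = elements[e]
            p_miss *= 1.0 - fault_probability(changed, proneness)
        return 1.0 - p_miss
    return sorted(tests, key=lambda t: score(tests[t]), reverse=True)

elements = {"A": (True, 0.8), "B": (False, 0.1), "C": (True, 0.2)}
tests = {"t1": {"A", "B"}, "t2": {"B"}, "t3": {"C"}}
print(prioritize(tests, elements))  # ['t1', 't3', 't2']: t1 covers the riskiest change
```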
We evaluate the performance of our proposed BN-based framework empirically.
Although the framework can help both test case selection and prioritization, we
propose using it primarily as a prioritization technique. We therefore compare our
technique against other prioritization techniques from the literature. Our empirical
evaluation examines a variety of objects and fault types. The results show that the
proposed framework can outperform other techniques in some cases and performs comparably in the others.
In sum, this thesis introduces a novel Bayesian framework for optimizing regression
testing and shows that the proposed framework can help testers improve the
cost effectiveness of their regression testing tasks.
|
49 |
Model Synchronization for Software Evolution. Ivkovic, Igor (26 August 2011)
Software evolution refers to the continuous change that a software system undergoes from inception to retirement. Each change must be efficiently and tractably propagated across the models representing the system at different levels of abstraction. The model synchronization activities needed to support the systematic specification and analysis of evolution activities are still not adequately identified and formally defined.
In our research, we first introduce a formal notation for the representation of domain models and model instances to form the theoretical basis for the proposed model synchronization framework. Besides conforming to a generic MOF metamodel, we consider that each software model also relates to an application domain context (e.g., operating systems,
web services). Therefore, we are addressing the problems of model synchronization by focusing on domain-specific contexts.
Secondly, we identify and formally define the model dependencies that are needed to trace and propagate changes across system models at different levels of abstraction, such as from design to source code. The approach for extracting these dependencies is based on Formal Concept Analysis (FCA) algorithms. We further model the identified dependencies using Unified Modeling Language (UML) profiles and constraints, and utilize the extracted dependency relations in the context of coarse-grained model synchronization.
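To illustrate the FCA machinery in miniature, the sketch below enumerates the formal concepts of an invented context of artifacts and the features they reference (not the thesis's actual repositories); shared closed feature sets hint at dependency clusters between models and code:

```python
from itertools import combinations

# Hypothetical formal context: artifacts (objects) x features they reference
# (attributes).
context = {
    "OrderModel":   {"order", "customer"},
    "OrderCode":    {"order", "customer", "db"},
    "BillingModel": {"invoice", "customer"},
    "BillingCode":  {"invoice", "customer", "db"},
}

def intent(objs):
    """Attributes common to every object in objs (all attributes if objs is empty)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

def extent(attrs):
    """Objects carrying every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# Enumerate formal concepts naively by closing every subset of objects
# (fine for tiny contexts; real FCA algorithms avoid this blow-up).
concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for group in combinations(objs, r):
        i = intent(group)
        concepts.add((frozenset(extent(i)), frozenset(i)))

for ext, inn in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(inn))
```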
Thirdly, we introduce modeling semantics that allow for more complex profile-based dependencies using Triple Graph Grammar (TGG) rules with corresponding Object Constraint Language (OCL) constraints. The TGG semantics provide for fine-grained model synchronization, and enable compliance with the Query/View/Transformation (QVT) standards.
The introduced framework is assessed on a large, industrial case study of the IBM Commerce system. The dependency extraction framework is applied to repositories of business process models and related source code. The extracted dependencies were evaluated by IBM developers, and the corresponding precision and recall values were calculated, with results that match the scope and goals of the research. The grammar-based model synchronization and dependency modelling using profiles have also been applied to the IBM Commerce system and evaluated by the developers and architects involved in its development. Stakeholders found the results of this experiment valuable, and a patent codifying the results was filed by IBM and has been granted. Finally, the results of this experiment have been formalized as TGG rules and used in the context of fine-grained model synchronization.
|
50 |
Evolving Software Systems for Self-Adaptation. Amoui Kalareh, Mehdi (23 April 2012)
There is a strong synergy between the concepts of evolution and adaptation in software engineering: software adaptation refers both to the current software being adapted and to the evolution process that leads to the new adapted software. Evolutionary changes made for the purpose of adaptation are usually applied at development or compile time and are meant to handle predictable situations in the form of software change requests. On the other hand, software may also change and adapt itself in response to changes in its environment. Such adaptive changes are usually dynamic and are suitable for dealing with unpredictable or temporary changes in the software's operating environment.
A promising solution for software adaptation is to develop self-adaptive software systems that can manage changes dynamically at runtime in a rapid and reliable way. One of the main advantages of self-adaptive software is its ability to manage the complexity that stems from highly dynamic and nondeterministic operating environments. If a self-adaptive software system has been engineered and used properly, it can greatly improve the cost-effectiveness of software change throughout its lifespan. However, in practice, many of the existing approaches towards self-adaptive software are rather expensive and may increase the overall system complexity, as well as subsequent future maintenance costs. This means that in many cases self-adaptive software is not a good solution, because its development and maintenance costs do not pay off. The situation is even worse when making current (legacy) systems adaptive.
Several factors affect the cost-effectiveness and usability of self-adaptive software; the main objective of this thesis, however, is to make a software system adaptive in a cost-effective way, while keeping the target adaptive software generic, usable, and evolvable, so as to support future changes. To engineer and use self-adaptive software systems effectively, we propose a new conceptual model for identifying and specifying problem spaces in the context of self-adaptive software systems. Based on the foundations of this conceptual model, we propose a model-centric approach for engineering self-adaptive software by designing a generic adaptation framework and a supporting evolution process. This approach is particularly tailored to facilitate and simplify the process of evolving and adapting current (legacy) software towards runtime adaptivity. The conducted case studies reveal the applicability and effectiveness of this approach in bringing self-adaptive behaviour into non-adaptive applications that essentially demand adaptive behaviour to remain viable.
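While the thesis's adaptation framework is model-centric and considerably richer, runtime adaptation loops of this kind commonly follow a monitor-analyse-plan-execute shape; the sketch below uses an invented managed system and a single "add worker" tactic purely for illustration.

```python
import random

class FakeSystem:
    """Stand-in managed system: latency falls as workers are added."""
    def __init__(self):
        self.workers = 1
    def measure_latency(self):
        return random.uniform(50, 400) / self.workers
    def apply(self, plan):
        if plan["action"] == "add_worker":
            self.workers += 1

class AdaptationLoop:
    """A bare-bones monitor-analyse-plan-execute loop."""
    def __init__(self, system, latency_slo_ms=200):
        self.system = system
        self.latency_slo_ms = latency_slo_ms
    def monitor(self):
        return {"latency_ms": self.system.measure_latency()}
    def analyse(self, symptoms):
        return symptoms["latency_ms"] > self.latency_slo_ms   # adapt?
    def plan(self):
        return {"action": "add_worker"}                       # pick a tactic
    def run_once(self):
        if self.analyse(self.monitor()):
            self.system.apply(self.plan())

system = FakeSystem()
loop = AdaptationLoop(system)
for _ in range(5):
    loop.run_once()
print("workers after five loop iterations:", system.workers)
```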
|