Empirical Studies of Performance Bugs and Performance Analysis Approaches for Software Systems

Developing high-quality software is of paramount importance for keeping existing customers satisfied and remaining competitive. One of the most important software quality characteristics is performance, which defines how fast and how efficiently software performs its operations.

While several studies have shown that field problems are often due to performance issues rather than feature bugs, prior research has typically treated all bugs alike when studying various aspects of software quality (e.g., predicting the time to fix a bug) or has focused on other types of bugs (e.g., security bugs). There is little work that studies performance bugs.

In this thesis, we perform an empirical study that quantitatively and qualitatively examines performance bugs in the Mozilla Firefox and Google Chrome web browser projects, in order to determine whether performance bugs really differ from other bugs in practice and to understand the rationale behind those differences.

In our quantitative study, we find that performance bugs in the Firefox project take longer to fix, are fixed by more experienced developers, and require changes to more lines of code. We also compare performance bugs with security bugs, since security bugs have been extensively studied separately in the past. We find that, in the Firefox project, security bugs are re-opened and tossed more often, are fixed and triaged faster, are fixed by more experienced developers, and are assigned to more developers. The Google Chrome project also shows quantitative differences between performance and non-performance bugs, although these differences are not the same as in the Firefox project.
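The abstract does not name the statistical procedure behind these comparisons, so the following is only an illustrative sketch of the kind of analysis involved: comparing hypothetical fix times for two bug categories with a Mann-Whitney U test, a common non-parametric choice for skewed distributions such as fix times. The data values are invented, and the choice of test is an assumption, not the thesis's actual method.

```python
from scipy.stats import mannwhitneyu

# Hypothetical fix times in days for two bug categories. Real values would
# be mined from the projects' bug repositories (resolution timestamp minus
# report timestamp); these numbers are invented for illustration only.
performance_fix_days = [12, 30, 45, 7, 60, 25, 90]
other_fix_days = [3, 10, 8, 15, 5, 20, 6]

# Test whether performance-bug fix times are stochastically greater.
stat, p_value = mannwhitneyu(performance_fix_days, other_fix_days,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")
# A small p-value would support the claim that performance bugs take
# longer to fix than other bugs in this (toy) sample.
```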

Based on our quantitative results, we examine the data from a qualitative point of view. Among our most interesting observations, we find that end-users are often frustrated with performance problems and often threaten to switch to competing software products.

To better understand why some users become so frustrated (even threatening to switch products) even though most systems are well tested, we performed an additional study. In this final study, we compare a global perspective with a user-centric perspective for analyzing performance data. We find that the user-centric perspective can reveal a small number of users who experience considerably poor performance, while the global perspective may show good or unchanged performance across releases.
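The contrast between the two perspectives can be made concrete with a small sketch. This is a hypothetical illustration, not the thesis's analysis code: the sample data, the (user, release, response time) layout, and the choice of the median as the aggregate are all assumptions.

```python
from collections import defaultdict
from statistics import median

# Hypothetical per-request response times (ms), tagged by user and release.
# In a real study, such data would come from field performance logs.
samples = [
    ("alice", "v1", 110), ("alice", "v2", 112),
    ("bob",   "v1", 105), ("bob",   "v2", 108),
    ("carol", "v1", 100), ("carol", "v2", 950),  # one user regresses badly
]

def global_median(samples, release):
    """Global perspective: pool all users' measurements for a release."""
    return median(t for user, rel, t in samples if rel == release)

def per_user_medians(samples, release):
    """User-centric perspective: summarize each user separately."""
    by_user = defaultdict(list)
    for user, rel, t in samples:
        if rel == release:
            by_user[user].append(t)
    return {user: median(ts) for user, ts in by_user.items()}

for release in ("v1", "v2"):
    print(release, "global median:", global_median(samples, release))
    print(release, "per user:", per_user_medians(samples, release))

# The global median barely moves between releases (105 -> 112), suggesting
# similar performance, while the per-user view exposes carol's severe
# regression (100 -> 950) -- the effect described in the paragraph above.
```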

The results of our studies show that performance bugs are different and should be studied separately in large-scale software systems in order to improve the quality assurance processes related to software performance.

Thesis (Master, Computing) -- Queen's University, 30 April 2012

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OKQ.1974/7162
Date: 30 April 2012
Creators: ZAMAN, SHAHED
Contributors: Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.))
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Thesis
Rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
Relation: Canadian theses
