1 |
Exploring the Accuracy of Existing Effort Estimation Methods for Distributed Software Projects – Two Case Studies. Khan, Abid Ali; Muhammad, Zaka Ullah. January 2009
The term “globalization” brought many challenges to the field of software development, and accurate effort estimation in Global Software Development (GSD) is one of them. Any discussion of effort estimation begins with effort estimation methods, of which many are available. However, existing methods designed for co-located projects may not be capable of estimating effort accurately for distributed projects, which is one reason the failure rate of GSD projects is high. It is therefore important to calibrate existing methods, or invent new ones, for the GSD environment. This thesis is an attempt to explore the accuracy of effort estimation methods for distributed projects. For this purpose, the authors selected three estimation approaches: COCOMO II, SLIM and ISBSG. COCOMO II and SLIM are two well-known effort estimation methods, whereas ISBSG is used to check the trend of a project against its repository. The methods and approaches were selected on the basis of their popularity and their advantages over alternatives. Two finished projects from two different organizations were selected and analyzed as case studies. The results indicated that effort estimates from COCOMO II deviated by 15.97% for project A and 9.71% for project B, whereas SLIM showed deviations of 4.17% for project A and 10.86% for project B. The authors thus concluded that both methods underestimated the effort in the studied cases. Furthermore, factors that might cause the deviation are discussed and several solutions are recommended. In particular, the authors state that existing effort estimation methods can be used for GSD projects, but that they need calibration with respect to GSD factors to achieve accurate results. Such calibration will help in improving the effort estimation process.
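As a minimal illustration of the kind of comparison the abstract describes, the sketch below applies the published COCOMO II nominal effort equation (COCOMO II.2000 calibration constants A = 2.94, B = 0.91) and computes the signed percentage deviation of an estimate from actual effort. The scale-factor ratings, effort multipliers and actual effort here are illustrative assumptions, not values from the thesis's case-study projects.

```python
# Sketch: COCOMO II nominal effort (PM = A * Size^E * prod(EM)) and the
# relative-deviation measure used to compare an estimate with actuals.
# All project inputs below are hypothetical.

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """Effort in person-months: PM = A * Size^E * prod(EM),
    where E = B + 0.01 * sum of the five scale factors."""
    e = b + 0.01 * sum(scale_factors)
    pm = a * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

def relative_deviation(estimated, actual):
    """Signed deviation of the estimate from actual effort, in percent."""
    return (estimated - actual) / actual * 100.0

# Illustrative project: 40 KSLOC, nominal ratings (all EM = 1.0),
# mid-range scale-factor values.
est = cocomo2_effort(40.0,
                     scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                     effort_multipliers=[1.0] * 17)
actual = 180.0  # hypothetical actual person-months
print(f"estimate = {est:.1f} PM, deviation = {relative_deviation(est, actual):.2f}%")
```

A negative deviation corresponds to underestimation, which is the pattern both methods showed in the studied cases.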
|
2 |
The value and validity of software effort estimation models built from a multiple organization data set. Deng, Kefu. January 2008
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have only a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but to limitations in the analytical processes employed:
• the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias;
• the degree to which the conclusions of these studies might apply more generally cannot be determined, because of a lack of transparency in the data analysis processes used.
It is the aim of this research to provide a more robust and visible test of the utility of the largest multi-organization data set currently available – that from the ISBSG – in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction. Stepwise regression is employed to construct ‘local’, ‘global’ and ‘refined global’ models of effort that are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for constructing a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficient in the absence of reliable local data – an issue that could be investigated in future research.
The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
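The local-versus-global comparison described above can be sketched with a simple log-log size-effort regression fitted once on an organization's own projects and once on a pooled multi-organization sample, then scored by MMRE (mean magnitude of relative error) on the local hold-out projects. All project data below is synthetic and purely illustrative; the thesis's actual models use stepwise regression over ISBSG variables.

```python
# Sketch: fit effort = a * size^b on "local" vs "global" data and
# compare prediction accuracy (MMRE) on the local organization's
# hold-out projects. Data is synthetic, for illustration only.
import math
import random

def fit_loglog(sizes, efforts):
    """Least-squares fit of log(effort) = c + b*log(size); returns (a, b)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - b * mx), b

def mmre(model, sizes, efforts):
    """Mean magnitude of relative error of the model's predictions."""
    a, b = model
    return sum(abs(a * s ** b - e) / e
               for s, e in zip(sizes, efforts)) / len(sizes)

random.seed(1)
# Synthetic "local" organization: effort ~ 8 * size^1.1 with mild noise.
local_sizes = [random.uniform(50, 500) for _ in range(20)]
local_efforts = [8 * s ** 1.1 * random.uniform(0.8, 1.2) for s in local_sizes]
# Synthetic "global" repository with a different productivity profile.
global_sizes = [random.uniform(50, 500) for _ in range(200)]
global_efforts = [5 * s * random.uniform(0.5, 1.5) for s in global_sizes]

train_s, train_e = local_sizes[:12], local_efforts[:12]
test_s, test_e = local_sizes[12:], local_efforts[12:]
local_model = fit_loglog(train_s, train_e)
global_model = fit_loglog(global_sizes, global_efforts)
print(f"local MMRE  = {mmre(local_model, test_s, test_e):.3f}")
print(f"global MMRE = {mmre(global_model, test_s, test_e):.3f}")
```

With these synthetic inputs the local model scores the better (lower) MMRE, mirroring the study's finding that local data is usually the stronger basis for prediction.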
|
4 |
Investigating the Nature of the Relationship between Software Size and Development Effort. Bajwa, Sohaib-Shahid. January 2008
Software effort estimation remains a challenging and debated research area. Most software effort estimation models take software size as their base input. Among them, the Constructive Cost Model (COCOMO II) is a widely known effort estimation model; it uses Source Lines of Code (SLOC) as the size measure for estimating effort. However, many problems arise when using SLOC as a size measure because it becomes available only late in the software life cycle. Considerable research has therefore gone into identifying the nature of the relationship between software functional size and effort, since functional size can be measured very early, as soon as the functional user requirements are available. Many other project-related factors have also been found to affect effort estimation based on software size, among them application type, programming language and development type. This thesis aims to investigate the nature of the relationship between software size and development effort. It explains known effort estimation models and provides an understanding of Function Points and the Functional Size Measurement (FSM) method. Factors affecting the relationship between software size and development effort are also identified, and an effort estimation model is developed through statistical analysis. We present the results of an empirical study conducted to investigate the significance of different project-related factors on the relationship between functional size and effort. We used project data from the International Software Benchmarking Standards Group (ISBSG) data set, selecting projects measured with Common Software Measurement International Consortium (COSMIC) Function Points. For the statistical analyses, we performed stepwise Analysis of Variance (ANOVA) and Analysis of Covariance (ANCOVA) to build the multivariable models, and Multiple Regression Analysis to formalize the relation.
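A minimal sketch of the kind of analysis described above: a log-log relation between COSMIC functional size and effort, fitted separately per programming language to show how a categorical project factor can shift the size-effort relationship, which is the intuition behind the ANCOVA step. The data is synthetic; the thesis's actual analysis uses ISBSG project records.

```python
# Sketch: per-language log-log fits of effort vs. COSMIC functional
# size, illustrating a categorical factor shifting the relationship.
# All projects below are synthetic.
import math
import random

def fit_loglog(points):
    """OLS fit of log(effort) = c + b*log(size) over (size, effort) pairs;
    returns (intercept c, slope b)."""
    xs = [math.log(s) for s, _ in points]
    ys = [math.log(e) for _, e in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

random.seed(7)
projects = {"Java": [], "COBOL": []}
for _ in range(30):
    size = random.uniform(100, 1000)  # COSMIC Function Points
    projects["Java"].append((size, 2.0 * size ** 0.9 * random.uniform(0.9, 1.1)))
    size = random.uniform(100, 1000)
    projects["COBOL"].append((size, 3.5 * size ** 0.9 * random.uniform(0.9, 1.1)))

for lang, pts in projects.items():
    c, b = fit_loglog(pts)
    print(f"{lang:5s}: effort ~ {math.exp(c):.2f} * size^{b:.2f}")
```

The fitted intercepts differ by language while the slopes stay close, the pattern ANCOVA formalizes as a significant categorical effect on top of a common size-effort covariate.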
|
5 |
The Evaluation of Well-known Effort Estimation Models Based on Predictive Accuracy Indicators. Khan, Khalid. January 2010
Accurate and reliable effort estimation remains one of the most challenging processes in software engineering. There have been numerous attempts to develop cost estimation models; however, evaluating the accuracy and reliability of those models has gained interest in the last decade. A model can be finely tuned to specific data, but the problem of selecting the most appropriate model remains. A model's predictive accuracy is determined by comparing various accuracy measures, and the model with the minimum relative error is considered the best fit; for it to be accepted as such, the difference in predictive accuracy must also be statistically significant. This practice has evolved into model evaluation: a model's predictive accuracy indicators need to be statistically tested before deciding to use that model for estimation. The aim of this thesis is to statistically evaluate well-known effort estimation models according to their predictive accuracy indicators using two new approaches: bootstrap confidence intervals and permutation tests. In this thesis, the significance of the differences between various accuracy indicators was empirically tested on projects obtained from the International Software Benchmarking Standards Group (ISBSG) data set. We selected projects measured in Unadjusted Function Points (UFP) with data quality rating A. Analysis of Variance (ANOVA) and regression were then used to form a Least Squares (LS) set, and Estimation by Analogy (EbA) to form an EbA set: stepwise ANOVA was used to build the parametric model, while the k-NN algorithm was employed to obtain analogue projects for effort estimation in EbA. It was found that estimation reliability increased with statistical pre-processing of the data; moreover, the significance of the accuracy indicators was tested not only with standard statistics but also with more complex inferential statistical methods.
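The Estimation by Analogy step named above can be sketched in a few lines: find the k completed projects most similar to the new one (here, by size alone) and predict effort as the mean of their actual efforts. The history data and the single-feature distance are illustrative assumptions, not the thesis's actual EbA configuration.

```python
# Sketch: k-NN Estimation by Analogy over a (size, effort) history.
# History values are synthetic placeholders.

def knn_estimate(history, new_size, k=3):
    """history: list of (size, effort) pairs; returns the mean effort
    of the k projects nearest in size to new_size."""
    nearest = sorted(history, key=lambda p: abs(p[0] - new_size))[:k]
    return sum(e for _, e in nearest) / len(nearest)

history = [(120, 900), (200, 1500), (210, 1450), (450, 3600), (500, 4100)]
# For a new 205-unit project, the three nearest analogues are the
# projects sized 200, 210 and 120; their efforts are averaged.
print(knn_estimate(history, 205, k=3))
```

Real EbA setups typically normalize several features and may weight neighbours by distance; the single-feature mean above is only the skeleton of the technique.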
The decision to select the non-parametric methodology (EbA) for generating project estimates was thus not arbitrary but statistically justified.
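The two evaluation approaches the thesis introduces can be sketched as follows: a percentile bootstrap confidence interval for the difference in mean MRE between two methods, and a paired permutation test (random sign flips of the per-project differences) for its significance. The per-project MRE values below are synthetic placeholders, not results from the thesis.

```python
# Sketch: bootstrap CI and paired permutation test on the difference
# of per-project magnitudes of relative error (MRE) between two
# estimation methods. MRE values are synthetic.
import random

random.seed(42)
mre_a = [0.10, 0.22, 0.15, 0.30, 0.12, 0.25, 0.18, 0.20]  # method A
mre_b = [0.35, 0.28, 0.40, 0.33, 0.31, 0.45, 0.29, 0.38]  # method B

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_ci(diffs, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the mean paired MRE difference."""
    means = sorted(mean(random.choices(diffs, k=len(diffs)))
                   for _ in range(n_boot))
    return (means[int(alpha / 2 * n_boot)],
            means[int((1 - alpha / 2) * n_boot) - 1])

def permutation_p(diffs, n_perm=5000):
    """Paired permutation test: under H0 the sign of each per-project
    difference is arbitrary, so flip signs at random and count how often
    the permuted |mean| reaches the observed |mean|."""
    observed = abs(mean(diffs))
    hits = sum(abs(mean([d * random.choice((1, -1)) for d in diffs])) >= observed
               for _ in range(n_perm))
    return hits / n_perm

diffs = [a - b for a, b in zip(mre_a, mre_b)]
lo, hi = bootstrap_ci(diffs)
print(f"95% bootstrap CI for mean(MRE_A - MRE_B): [{lo:.3f}, {hi:.3f}]")
print(f"permutation p-value: {permutation_p(diffs):.4f}")
```

Here method A's MRE is lower on every project, so the CI excludes zero and the permutation p-value is small; with real data either test can come out non-significant, which is exactly what the statistical screening is for.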
|