31 |
Architectural level risk assessment / Hassan, Ahmed E., 2004
Thesis (Ph. D.)--West Virginia University, 2004. / Title from document title page. Document formatted into pages; contains xx, 157, [12] p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 143-157).
|
32 |
Proposal to develop enhancements and extensions of formal models for risk assessment in software projects / Murrah, Michael R., January 2002 (PDF)
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, September 2002. / Thesis advisor(s): Luqi, Nabendu Chaki. Includes bibliographical references (p. 115-116). Also available online.
|
33 |
Validating cohesion metrics by mining open source software data with association rules / Singh, Pariksha, January 2008
Dissertation submitted for the fulfillment of the requirement for the degree of Masters in Information Technology, Department of Information Technology, Faculty of Accounting and Informatics, Durban University of Technology, 2008. / Competitive pressure on the software industry encourages organizations to examine
the effectiveness of their software development and evolutionary processes.
It is therefore important to measure software in order to improve its quality. The question is not whether we should measure software but how it should be measured. Software measurement has been in existence for over three decades and is still in the process of becoming a mature science. The many influences of new software development technologies have led to a diverse growth in software measurement technologies, which has resulted in various definitions and validation techniques.
An important aspect of software measurement is the measurement of design, which nowadays often means the measurement of object-oriented design. Chidamber and Kemerer (1994) designed a metric suite for object-oriented design, which has provided a new foundation for metrics and acts as a starting point for further development of the science of software measurement.
This study documents theoretical object-oriented cohesion metrics and calculates those metrics for classes extracted from a sample of open source software packages. For each open source software package, the following data is recorded: software size, age, domain, number of developers, number of bugs, support requests, feature requests, etc. The study then tests, by means of association rules, which theoretical cohesion metrics are validated against the hypotheses that older software is more cohesive than younger software, that bigger packages are less cohesive than smaller packages, and that the smaller the software program, the more maintainable it is.
This study attempts to validate existing theoretical object-oriented cohesion metrics
by mining open source software data with association rules.
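As an illustration of the kind of analysis described in this abstract, the sketch below mines simple association rules (support and confidence) over discretised package attributes. It is not taken from the dissertation: the attribute names, discretisation thresholds and sample records are hypothetical, and a real study would use the measured metric values.

```python
from itertools import combinations

# Hypothetical per-package records: each package is described by categorical
# "items" derived from the measurements listed in the abstract (age, size,
# cohesion). The discretisation into old/young, small/large etc. is illustrative.
packages = [
    {"old", "small", "high_cohesion"},
    {"old", "large", "high_cohesion"},
    {"young", "large", "low_cohesion"},
    {"young", "small", "high_cohesion"},
    {"old", "small", "high_cohesion"},
    {"young", "large", "low_cohesion"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def association_rules(transactions, min_support=0.3, min_confidence=0.7):
    """Enumerate simple one-antecedent/one-consequent rules A -> B."""
    items = set().union(*transactions)
    rules = []
    for a, b in combinations(sorted(items), 2):
        for antecedent, consequent in (({a}, {b}), ({b}, {a})):
            sup = support(antecedent | consequent, transactions)
            sup_a = support(antecedent, transactions)
            if sup_a == 0 or sup < min_support:
                continue
            confidence = sup / sup_a
            if confidence >= min_confidence:
                rules.append((antecedent, consequent, sup, confidence))
    return rules

# A rule such as {'old'} -> {'high_cohesion'} with high confidence would lend
# support to the hypothesis that older software is more cohesive.
for antecedent, consequent, sup, conf in association_rules(packages):
    print(f"{antecedent} -> {consequent}  support={sup:.2f}  confidence={conf:.2f}")
```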
|
34 |
Performance measurement as a tool for software engineering / Van Aardt, Jan Markus, 22 July 2005
Some software development teams regard software performance measurement as a mere luxury. When it happens, it often tends to be infrequent, insufficient and subjective. Countless software projects have been sent into an uncontrollable spiral of poor management and unsatisfactory results. By revisiting old ideas and policies, many companies have turned themselves around; to ensure that software engineering does the same, technologies and procedures have to be re-evaluated. The fact that many companies have decided to cut technology expenditure forces software development teams to look for alternative options for deploying high-performance software systems. As companies move into the electronic era and evolve towards its next stage, electronic commerce, it becomes ever more important to apply these concepts to Internet development projects and procedures. The Internet market has shown that two software providers are aiming for worldwide domination of Internet server deployment: Microsoft and Apache. The Apache web server is currently the most commonly used server on the Internet (60%), with Microsoft's Internet Information Server (25%) in a strong second place. The need for higher throughput and better services grows with each passing day, increasing the pressure on these two software vendors to provide the best architecture for their clients' needs. This study intends to provide the reader with an objective view of a basic performance comparison between these two products and tries to find a correlation between the performance tests and the products' popularity standings. The tests for this study were performed on identical hardware architectures with one difference: the operating system. By comparing the costly proprietary Microsoft solution with its cheaper open source rival, Linux, certain opinions were tested. Would a product developed by a software company that invests millions of dollars in its products perform better than this free-for-all solution, or would the selfless inputs of hundreds of developers all over the world finally pay off through the creation of the world's best Internet server? The results of these tests were evaluated through formal statistical methods, providing overall comparisons of several common uses of web servers. These results were then applied in a small field test to prove the findings in practice, with some interesting outcomes in terms of supporting technologies, new rapid application development (RAD) tools and data access models. This research in itself will not change the mind of any Internet programmer. What it hopes to achieve is to demonstrate to software engineers that current processes and methods of developing software are not always the right way of doing things. Furthermore, it highlights many important factors that are often ignored or overlooked while managing software projects. Change management, process re-engineering and risk management form crucial elements of software development projects; by not adhering to certain critical elements of software development, projects risk not reaching their goals and could even fail completely. Performance measurement acts as a tool for software engineering, providing guidelines for technological decisions, project management and, ultimately, project success. / Dissertation (MSc (Computer Science))--University of Pretoria, 2005. / Computer Science / unrestricted
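For a flavour of the kind of raw measurement such a web-server comparison rests on, the sketch below times concurrent HTTP GET requests and reports throughput and mean latency. It is not the dissertation's test harness: the URL, request count and concurrency level are placeholders, and a study of this kind would add controlled hardware, repeated runs and formal statistical analysis.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder: the server under test
REQUESTS = 200                   # illustrative workload size
CONCURRENCY = 20                 # illustrative number of parallel clients

def fetch(_):
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.perf_counter() - wall_start

    print(f"throughput: {REQUESTS / elapsed:.1f} requests/s")
    print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```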
|
35 |
MEASUREMENT AND ITS HISTORICAL CONTEXT / Gedela, Naga Venkata Praveen babu, 12 November 2008
No description available.
|
36 |
Assessment of software measurement / Berry, Michael, CSE, UNSW, January 2006
Background and purpose. This thesis documents a program of five studies concerned with the assessment of software measurement. The goal of this program is to assist the software industry in improving the information support for managers, analysts and software engineers by providing evidence of where opportunities for improving measurement and analysis exist. Methods. The first study examined the assessment of software measurement frameworks using models of best practice based on performance/success factors. The software measurement frameworks of thirteen organisations were surveyed, and the association between each factor and the outcome experienced with the organisations' frameworks was then evaluated. The subsequent studies were more information-centric and investigated the use of information quality models to assess the support provided for software processes. For these studies, information quality models targeting specific software processes were developed using practitioner focus groups; the models were instantiated in survey instruments and the responses were analysed to identify opportunities to improve the information support provided. The final study compared the use of two different information quality models for assessing and improving information support: assessments of the same quantum of information were made using a targeted model and a generic model, and the assessments were then evaluated by an expert panel in order to identify which information quality model was more effective for improvement purposes. Results. The study of performance factors for software measurement frameworks confirmed the association of some factors with success and quantified that association. In particular, it demonstrated the importance of evaluating contextual factors. The conclusion is that factor-based models may appropriately be used for risk analysis and for identifying constraints on measurement performance. Note, however, that a follow-up study showed that some initially successful frameworks subsequently failed; this implies an instability in the dependent variable, success, that could reduce the value of factor-based models for predicting success. The studies of targeted information quality models demonstrated the effectiveness of targeted assessments for identifying improvement opportunities and suggest that they are likely to be more effective for improvement purposes than generic information quality models. The studies also showed the effectiveness of importance-performance analysis for prioritising improvement opportunities.
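The importance-performance analysis mentioned in the results can be sketched as follows: each information item is rated for importance and for current performance, and items falling in the high-importance/low-performance quadrant are the priority improvement opportunities. The item names, ratings and cut-off convention below are invented for illustration, not taken from the thesis.

```python
# Hypothetical survey results: mean importance and performance ratings
# (1-7 scale) for a few information items supporting a software process.
ratings = {
    "defect trend data":     (6.5, 3.1),
    "effort estimates":      (6.0, 5.8),
    "code churn statistics": (3.2, 2.5),
    "test coverage reports": (3.0, 6.1),
}

def importance_performance_quadrant(importance, performance, imp_cut, perf_cut):
    """Classify an item using the usual IPA quadrant labels."""
    if importance >= imp_cut and performance < perf_cut:
        return "concentrate here"        # priority improvement opportunity
    if importance >= imp_cut:
        return "keep up the good work"
    if performance < perf_cut:
        return "low priority"
    return "possible overkill"

# Use grand means as the quadrant cut-offs (one common IPA convention).
imp_cut = sum(i for i, _ in ratings.values()) / len(ratings)
perf_cut = sum(p for _, p in ratings.values()) / len(ratings)

for item, (imp, perf) in ratings.items():
    quadrant = importance_performance_quadrant(imp, perf, imp_cut, perf_cut)
    print(f"{item:24s} importance={imp:.1f} performance={perf:.1f} -> {quadrant}")
```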
|
37 |
Performance metrics for IT projects success within a financial organisation / Makhubele, K. M., January 2016
M. Tech. Business Information Systems / The technological world is advancing rapidly. Organisations aspire to meet their business goals and strategic objectives, and IT projects enable them to do so, so organisations invest a great deal of time and money in implementing them. The increasing need to use technology has led organisations to implement IT projects, which are undertaken to achieve the efficiency and effectiveness crucial to meeting business goals and strategic objectives. Other benefits include improved and increased customer satisfaction, improved growth and development within teams, and competitive advantage. Despite an increase in studies on performance metrics for IT project success in organisations, IT projects still fail. Many IT projects are considered failures when they are not delivered on time, are not delivered within budget, or when the delivered solution does not meet the business requirements. This research study aimed to determine the performance metrics for IT project success within a financial organisation.
|
38 |
A Tool for Measuring the Size, Structure and Complexity of Software / Versaw, Larry
The problem addressed by this thesis is the need for a software measurement tool that enforces a uniform measurement algorithm on several programming languages. The introductory chapter discusses the concern for software measurement and provides background for the specific models and metrics that are studied. A multilingual software measurement tool is then introduced that analyzes programs written in Ada, C, Pascal, or PL/I and quantifies over thirty different program attributes. Metrics computed by the program include McCabe's measure of cyclomatic complexity and Halstead's software science metrics. Some results and conclusions from preliminary data analysis using the tool are also given. The appendices contain exhaustive counting algorithms for obtaining the metrics in each language.
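For a flavour of the kind of counting such a tool performs, the sketch below computes McCabe's cyclomatic complexity for Python functions by counting decision points in the abstract syntax tree. It is only an approximation of the idea, not the thesis's multilingual Ada/C/Pascal/PL/I analyser, and the sample function is invented.

```python
import ast

# AST node types treated as decision points; each adds one to McCabe's
# cyclomatic complexity (which starts at 1 for the single entry path).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.IfExp, ast.comprehension)

def cyclomatic_complexity(func_node):
    complexity = 1
    for node in ast.walk(func_node):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'and'/'or' chains add one decision per extra operand.
            complexity += len(node.values) - 1
    return complexity

def report(source):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            print(f"{node.name}: cyclomatic complexity {cyclomatic_complexity(node)}")

report("""
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
""")
```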
|
39 |
[en] AN APPROACH TO EVOLUTION IN SOFTWARE MEASUREMENT REPOSITORIES: THE CLAIRVOYANT SYSTEM / [pt] UMA ABORDAGEM PARA A EVOLUÇÃO TRANSPARENTE EM REPOSITÓRIOS DE MEDIÇÃO DE SOFTWARE: O SISTEMA CLAIRVOYANT / BERNARDO ARRAES VINHOSA, 02 October 2007
The Clairvoyant system is a software measurement repository prototype whose distinguishing feature is the transparent evolution of its measurement model: changes can be made to the measurement model without exposing the underlying storage structure that makes this evolution possible. This is an important concern because the information needs that give rise to measurement in software engineering environments are constantly changing. To support transparent evolution of the measurement model, the Clairvoyant system was designed around a measurement meta-model and a query model suited to this purpose. This work explains these models and studies their influence on the repository's operational macro-processes: measurement model maintenance, data importing, data querying and data exporting.
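The central idea, evolving the measurement model without exposing the storage structure, can be illustrated with a generic entity-attribute-value layout. This sketch is not the Clairvoyant design itself: the tables, metric names and entities are invented to show how new metrics can be registered without changing the stored schema.

```python
import sqlite3

# A generic entity-attribute-value layout: new metrics can be registered at
# any time without altering the storage schema, which is one simple way to
# let a measurement model evolve behind a stable interface.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metric      (name TEXT PRIMARY KEY, unit TEXT);
    CREATE TABLE measurement (entity TEXT, metric TEXT REFERENCES metric(name),
                              value REAL, taken_at TEXT);
""")

def define_metric(name, unit):
    conn.execute("INSERT INTO metric VALUES (?, ?)", (name, unit))

def record(entity, metric, value, taken_at):
    conn.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                 (entity, metric, value, taken_at))

def query(metric):
    return conn.execute(
        "SELECT entity, value, taken_at FROM measurement WHERE metric = ?",
        (metric,)).fetchall()

# Evolving the measurement model is just data manipulation, not schema change.
define_metric("loc", "lines")
define_metric("cyclomatic_complexity", "paths")   # added later, no ALTER TABLE
record("module_a", "loc", 1240, "2007-10-02")
record("module_a", "cyclomatic_complexity", 17, "2007-10-02")
print(query("cyclomatic_complexity"))
```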
|
40 |
Object-oriented software development effort prediction using design patterns from object interaction analysis / Adekile, Olusegun, 15 May 2009
Software project management is arguably the most important activity in modern
software development projects. In the absence of realistic and objective management, the
software development process cannot be managed in an effective way. Software
development effort estimation is one of the most challenging and researched problems in
project management. With the advent of object-oriented development, there have been
studies to transpose some of the existing effort estimation methodologies to the new
development paradigm. However, no holistic approach to estimation exists that allows an initial estimate produced in the requirements-gathering phase to be refined through to the design phase. A SysML Point methodology is proposed, based on a common, structured and comprehensive modeling language (OMG SysML), that factors the models corresponding to the primary phases of object-oriented development into the effort estimate. This dissertation
presents a Function Point-like approach, named Pattern Point, which was conceived to
estimate the size of object-oriented products using the design patterns found in object
interaction modeling from the late OO analysis phase. In particular, two measures are proposed (PP1 and PP2) and theoretically validated, showing that they satisfy well-known properties necessary for size measures.
An initial empirical validation is performed that is meant to assess the usefulness
and effectiveness of the proposed measures in predicting the development effort of
object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures. The experimental results show that the Pattern Point
measure can be effectively used during the OOA phase to predict the effort values with a
high degree of confidence. The PP2 metric yielded the best results with an aggregate
PRED (0.25) = 0.874.
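PRED(0.25), used to report the result above, is a standard prediction-accuracy measure: the proportion of projects whose magnitude of relative error (MRE = |actual - predicted| / actual) is at most 25%. The sketch below shows the computation with invented effort values, not the dissertation's data.

```python
def mre(actual, predicted):
    """Magnitude of relative error for one project."""
    return abs(actual - predicted) / actual

def pred(actuals, predictions, q=0.25):
    """PRED(q): fraction of predictions whose MRE is at most q."""
    errors = [mre(a, p) for a, p in zip(actuals, predictions)]
    return sum(e <= q for e in errors) / len(errors)

# Invented actual vs. predicted effort values (person-hours), for illustration.
actual_effort    = [120, 340, 95, 410, 260]
predicted_effort = [130, 300, 90, 520, 250]

print(f"PRED(0.25) = {pred(actual_effort, predicted_effort):.3f}")
```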
|