1. Has Outsourcing/Contracting Out Saved Money and/or Improved Service Quality? A Vote-Counting Analysis. Bourbeau, John Allen, 02 December 2004.
Most privatization literature, of which outsourcing/contracting out is a subset, discusses: 1) localized anecdotes of how organizations privatized; 2) privatization's history; 3) its types; and/or 4) its pros and cons. What is missing is a methodologically defensible, comprehensive, macro-level view of whether or not outsourcing has saved money and/or improved service quality. Using the vote-counting analytical procedure, this dissertation provides one comprehensive view by analyzing and combining the findings of 40 sources covering 222 outsourced services at all levels of US government.
The author found that contracting out resulted in cost savings 79% of the time, but improved service quality only 48% of the time. The author also found that outsourcing savings and improved service quality declined as the level of government got smaller. This phenomenon could be an artifact of the federal requirement that a private contractor must show savings of at least 10% or $10 million before any outsourcing occurs. The lower levels of improved service can generally be explained by surveys which show that government managers treat service quality improvement as an afterthought.
The findings of this study are consistent with those of other authors (e.g., Hodge, Savas, Dehoog, Moore) and led the author to the following insights: 1) Outsourcing continues to grow. 2) The amount of evidence regarding outsourcing effectiveness is minimal, confusing, and highly subjective. 3) Outsourcing saves money, but at the expense of quality, or at least without improving it. 4) Contracting out can be a solution, but is not the only solution, to government funding and service quality shortfalls. 5) Successful outsourcing has been implemented in certain ways. 6) Outsourcing does not spell the end of public administration. Ph.D.
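The vote-counting procedure used above simply tallies how many primary findings favor an outcome. A minimal sketch of the idea, with invented placeholder records (the dissertation's 40 sources and 222 services are not reproduced here):

```python
# Vote counting: tally primary-study findings for or against each outcome.
# The records below are invented placeholders, not data from the dissertation.
findings = [
    {"service": "waste collection",  "cost_saved": True,  "quality_improved": False},
    {"service": "IT support",        "cost_saved": True,  "quality_improved": True},
    {"service": "payroll",           "cost_saved": False, "quality_improved": False},
    {"service": "fleet maintenance", "cost_saved": True,  "quality_improved": True},
]

def vote_count(records, outcome):
    """Return the share of findings reporting a positive result for `outcome`."""
    votes = sum(1 for r in records if r[outcome])
    return votes / len(records)

print(f"cost savings:     {vote_count(findings, 'cost_saved'):.0%}")
print(f"quality improved: {vote_count(findings, 'quality_improved'):.0%}")
```

Note that vote counting weighs every finding equally, regardless of sample size or study quality, which is one reason the method is considered weaker than a full meta-analysis.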
2. Secondary Analysis of Experimental Studies in Software Engineering. Cruzes, Daniela Soares, 27 August 2007.
Advisors: Mario Jino, Manoel Gomes de Mendonça Neto, Victor Robert Basili. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: While it is clear that there are many sources of variation from one software development context to another, it is not clear, a priori, which specific variables will influence the effectiveness of a process, technique, or method in a given context. For this reason, we argue that knowledge about software engineering must be built from many studies, run within similar contexts as well as very different ones. Previous work has discussed how to design related studies so as to document the values of context variables as precisely as possible, and thus to compare them with those observed in new studies.
While such a planned approach is important, we argue that an opportunistic approach is also practical. This approach (SecESE) combines results from multiple independently run studies after the fact, enabling the expansion of empirical software engineering knowledge from large evidence bases. In this dissertation, we describe a process for building empirical knowledge about software engineering. It uses an approach based on encoding the information extracted from papers and experimental data into a structured base, which can then be mined to extract new knowledge in a simple and flexible way. Ph.D. in Electrical Engineering, Computer Engineering concentration.
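The "structured base" idea can be sketched with a tiny relational store: facts extracted from papers are encoded as rows, then mined with ordinary queries. The schema, papers, and values below are invented for illustration and are not the dissertation's actual base:

```python
# Sketch of encoding extracted study information into a structured base and
# mining it. All rows and the schema are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE study (
    paper TEXT, context TEXT, technique TEXT, effective INTEGER)""")
conn.executemany("INSERT INTO study VALUES (?, ?, ?, ?)", [
    ("P1", "industry", "pair programming", 1),
    ("P2", "academia", "pair programming", 0),
    ("P3", "industry", "code inspection",  1),
    ("P4", "industry", "pair programming", 1),
])

# Combine independent studies after the fact: effectiveness rate by context.
for row in conn.execute("""
        SELECT technique, context, AVG(effective), COUNT(*)
        FROM study GROUP BY technique, context
        ORDER BY technique, context"""):
    print(row)
```

The point of the opportunistic approach is visible even at this scale: once findings are encoded uniformly, new questions (here, whether effectiveness differs by context) can be asked without rerunning any study.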
3. The Impact of Design Complexity on Software Cost and Quality. Duc, Anh Nguyen, January 2010.
Context: Early prediction of software cost and quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned from these studies, because no systematic synthesis exists to date. Aim: The research presented in this thesis is intended to contribute to the body of knowledge about cost and quality prediction. A major part of the thesis is a systematic review that discusses in detail the state of the art of research on the relationship between software design metrics and software cost and quality. Method: The thesis starts with a literature review that identifies the important complexity dimensions and potential predictors of external software quality attributes. Second, we aggregated Spearman correlation coefficients and estimated odds ratios from univariate logistic regression models over 59 data sets from 57 primary studies, using a tailored meta-analysis approach. Finally, we attempt to evaluate and explain the disagreement among the selected studies. Result: There are not enough studies to quantitatively summarize the relationship between design complexity and development cost. Fault proneness and maintainability are the most studied characteristics, accounting for 75% of the studies. Within the fault-proneness and maintainability studies, coupling and scale are the two most frequently used complexity dimensions. Vote counting shows evidence of a positive impact of some design metrics on these two quality attributes. Meta-analysis shows that the aggregated effect size of lines of code (LOC) is stronger than those of WMC, RFC, and CBO. The aggregated effect sizes of LCOM, DIT, and NOC are at the trivial-to-small level.
In subgroup analysis, the defect collection phase explains more than 50% of the observed variation in five of the seven investigated metrics. Conclusions: Coupling and scale metrics are more strongly correlated with fault proneness than cohesion and inheritance metrics. No design metric is a stronger single predictor than LOC. We found strong disagreement between the individual studies, and the defect collection phase partially explains the differences between them.
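The aggregation step can be illustrated with the Fisher z transform for pooling correlation coefficients, a standard meta-analytic technique (the thesis's tailored procedure may differ in its details). The coefficients and sample sizes below are invented examples, not the 59 real data sets:

```python
# Pooling correlation coefficients across data sets via the Fisher z transform
# with the usual inverse-variance (n - 3) weights. Inputs are invented examples.
import math

def pooled_spearman(studies):
    """studies: list of (r, n) pairs; return the weighted pooled correlation."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# Hypothetical per-data-set correlations with fault proneness.
loc_vs_faults  = [(0.55, 120), (0.62, 80), (0.48, 200)]   # strong, LOC-like
lcom_vs_faults = [(0.10, 120), (0.05, 80), (0.12, 200)]   # trivial, LCOM-like

print(round(pooled_spearman(loc_vs_faults), 3))
print(round(pooled_spearman(lcom_vs_faults), 3))
```

The transform stabilizes the variance of correlation estimates before averaging, so larger data sets pull the pooled value toward their estimates; applying it to Spearman coefficients is a common approximation.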