1 |
Definition and validation of requirements management measures. Loconsole, Annabella, January 2007.
The quality of software systems depends on early activities in the software development process, of which the management of requirements is one. When requirements are not managed well, a project can fail or become more costly than intended, and the quality of the software developed can decrease. Among the requirements management practices, it is particularly important to quantify and predict requirements volatility, i.e., how much the requirements are likely to change over time. Software measures can help in quantifying and predicting requirements attributes such as volatility. However, few measures have been defined so far, because the early phases are hard to formalise. Furthermore, very few requirements measures have been validated, which is needed to demonstrate that they are useful. The approach to requirements management in this thesis is quantitative: requirements management activities and requirements volatility are monitored through software measurement. In this thesis, a set of 45 requirements management measures is presented. The measures were defined using the goal question metric (GQM) framework for the two predefined goals of the requirements management key process area of the capability maturity model for software. A subset of these measures was validated theoretically and empirically in four case studies. Furthermore, an analysis of validated measures in the literature was performed, showing that there is a lack of validated process, project, and requirements measures in software engineering. The studies presented in this thesis show that size measures are good estimators of requirements volatility. The important result is that size is relevant: increasing the size of a requirements document implies that the number of changes to requirements increases as well. Furthermore, subjective estimations of volatility were found to be inaccurate assessors of requirements volatility. These results suggest that practitioners should complement subjective estimations of volatility with objective measures. Requirements engineers and project managers will benefit from the research presented in this thesis because the measures defined, which proved to be predictors of volatility, can help in understanding how much the requirements will change. By deploying the measures, practitioners would be prepared for possible changes in the schedule and cost of a project, giving them the possibility of creating alternative plans, new cost estimates, and new software development schedules.
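As a rough, hypothetical illustration of the kind of quantitative monitoring described above, the sketch below computes a simple volatility indicator (changes per requirement in a period) and checks how document size relates to the number of changes; the data and the specific indicator are invented for the example and are not among the 45 measures defined in the thesis.

```python
# Hypothetical sketch: relate requirements-document size to change counts.
# The data and the indicator below are illustrative, not the thesis's measures.
from scipy.stats import spearmanr

# (document size in number of requirements, requirement changes in the period)
observations = [(40, 6), (55, 9), (72, 14), (90, 21), (120, 30), (150, 41)]

sizes = [size for size, _ in observations]
changes = [count for _, count in observations]

# Simple volatility indicator: changes per requirement in the period.
volatility = [count / size for size, count in observations]

rho, p_value = spearmanr(sizes, changes)
print("volatility per document:", [round(v, 2) for v in volatility])
print(f"Spearman rho(size, changes) = {rho:.2f} (p = {p_value:.3f})")
```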
|
2 |
A Fault-Based Model of Fault Localization Techniques. Hays, Mark A., 01 January 2014.
Every day, ordinary people depend on software working properly. We take it for granted: from banking software to railroad switching software, flight control software, and software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure the quality of software is testing; often it is the only quality assurance activity undertaken, making it that much more important.
In a typical experiment studying fault localization techniques, a researcher intentionally seeds a fault (deliberately breaking the functionality of some source code) in the hope that the automated techniques under study will identify the fault's location in the source code. These faults are picked arbitrarily, so there is potential for bias in their selection. Previous researchers have established an ontology, called fault size, for understanding and expressing this bias. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future.
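To make the notion of program mutation concrete, here is a minimal sketch that applies one classic mutation operator (replacing `+` with `-`) to a small Python function; the operator and the function are invented for illustration and are not taken from the thesis.

```python
# Illustrative mutation: systematically replace an operator to seed a fault.
import ast

source = "def total(prices, tax):\n    return sum(prices) + tax\n"


class AddToSub(ast.NodeTransformer):
    """Mutation operator: turn every '+' into '-'."""

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node


mutant_source = ast.unparse(AddToSub().visit(ast.parse(source)))  # Python 3.9+
namespace = {}
exec(mutant_source, namespace)  # the mutant now computes sum(prices) - tax

print(mutant_source)
print(namespace["total"]([10, 20], 3))  # 27 instead of the expected 33
```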
While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification, which is worrisome because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of selecting appropriate statistical techniques. An evaluation of MeansTest suggests that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
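The abstract does not spell out MeansTest's internals, so the following is only a hedged sketch of the general shape of an assumption-driven test selection procedure (check normality, then choose a parametric or non-parametric comparison of means); it is not the MeansTest algorithm itself.

```python
# Hypothetical assumption-driven test selection; NOT the MeansTest algorithm.
from scipy import stats


def compare_means(sample_a, sample_b, alpha=0.05):
    """Pick a t-test when both samples look normal, otherwise Mann-Whitney U."""
    _, p_a = stats.shapiro(sample_a)
    _, p_b = stats.shapiro(sample_b)
    if p_a > alpha and p_b > alpha:
        _, p = stats.ttest_ind(sample_a, sample_b, equal_var=False)
        return "Welch t-test", p
    _, p = stats.mannwhitneyu(sample_a, sample_b, alternative="two-sided")
    return "Mann-Whitney U", p


# Example: fault localization scores of two hypothetical techniques.
scores_a = [0.71, 0.68, 0.80, 0.74, 0.77, 0.69]
scores_b = [0.58, 0.63, 0.61, 0.66, 0.55, 0.60]
test_name, p_value = compare_means(scores_a, scores_b)
print(f"{test_name}: p = {p_value:.4f}")
```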
|
3 |
A Usability Inspection Method for Model-driven Web Development Processes. Fernández Martínez, Adrián, 20 November 2012.
Web applications are currently considered an essential and indispensable element of all business activity and information exchange, and a driver of social networks. Usability, in this type of application, is recognised as one of the most important key factors, since the ease or difficulty that users experience with these applications largely determines their success or failure. However, current Web usability evaluation proposals have several limitations: the concept of usability is only partially supported, usability evaluations are mainly performed once the Web application has been developed, there is a lack of guidance on how to properly integrate usability into Web development, and there is also a lack of Web usability evaluation methods that have been empirically validated. Moreover, most Web development processes do not take advantage of the artefacts produced in the design phases. These intermediate software artefacts are used mainly to guide developers and to document the Web application, but not to perform usability evaluations. Since the traceability between these artefacts and the final Web application is not well defined, performing usability evaluations on these artefacts is difficult. This problem is mitigated in model-driven Web development, where the intermediate artefacts (models), which represent different perspectives of a Web application, are used at all stages of the development process and the final source code is generated automatically from these models. By taking the traceability between these models into account, evaluating the models makes it possible to detect usability problems that the end users of the final Web application would experience, and to provide recommendations for correcting them during the early stages of the Web development process.
Addressing the limitations identified above, this thesis aims to propose a usability inspection method that can be integrated into different model-driven Web development processes. The method consists of a Web usability model, which decomposes the concept of usability into sub-characteristics, attributes, and generic metrics, and a Web Usability Evaluation Process (WUEP), which provides guidance on how the usability model can be used to carry out specific evaluations. The generic metrics of the usability model must be operationalised in order to be applicable to the software artefacts of different Web development methods and at different abstraction levels, which makes it possible to evaluate usability at several stages of the Web development process, especially the early ones. Both the usability model and the evaluation process are aligned with the latest ISO/IEC 25000 standard for software product quality evaluation (SQuaRE).
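As a toy illustration of what operationalising a generic metric for a concrete artefact might look like, the sketch below applies an invented "links per navigation node" metric to a hypothetical navigational model; the attribute, metric, and threshold are made up for the example and are not WUEP's actual metrics.

```python
# Toy operationalisation of a generic usability metric for a model artefact;
# the attribute, metric name, and threshold are invented, not WUEP's.
from dataclasses import dataclass


@dataclass
class Metric:
    attribute: str      # usability attribute the metric quantifies
    name: str
    threshold: float    # value above which a usability problem is reported


def evaluate_navigation_model(links_per_node, metric):
    """Apply the operationalised metric to a hypothetical navigational model."""
    worst_node, worst_value = max(links_per_node.items(), key=lambda kv: kv[1])
    return {
        "metric": metric.name,
        "worst_node": worst_node,
        "worst_value": worst_value,
        "problem_detected": worst_value > metric.threshold,
    }


breadth = Metric(attribute="Navigability",
                 name="Outgoing links per navigation node",
                 threshold=9)
navigation_model = {"Home": 7, "Catalogue": 12, "Checkout": 4}
print(evaluate_navigation_model(navigation_model, breadth))
```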
The proposed usability inspection method (WUEP) has been instantiated in two different model-driven Web development processes (OO-H and WebML) in order to demonstrate the feasibility of our proposal. In addition, WUEP was empirically validated by conducting a family of experiments in OO-H and a controlled experiment in WebML. The goal of our empirical studies was to evaluate the participants' effectiveness, efficiency, perceived ease of use, and perceived satisfaction when using WUEP compared with a widely used industrial inspection method: Heuristic Evaluation (HE). The statistical analysis and meta-analysis of the data obtained separately from each experiment indicated that WUEP is more effective and efficient than HE in detecting usability problems. The evaluators were also more satisfied when applying WUEP, and they / Fernández Martínez, A. (2012). A Usability Inspection Method for Model-driven Web Development Processes [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17845
|
4 |
Antecedents and Outcomes of Ambidexterity in the Supply Chain: Theoretical Development and Empirical Validation. Scott, Nehemiah D., January 2015.
No description available.
|
5 |
Étude de l'influence de l'inertie thermique sur les performances énergétiques des bâtiments / Study of the impact of thermal mass on the energy performance of buildings. Munaretto, Fabio, 07 February 2014.
Being increasingly well insulated, high-performance buildings are very sensitive to the solar gains transmitted through glazing as well as to internal gains. In this context, thermal mass can be useful by storing surplus energy and reducing temperature variations, thereby improving thermal comfort. Assessing the energy and environmental performance and the thermal comfort of buildings requires reliable dynamic thermal simulation (DTS) tools. Historically, model developers have tried to find an appropriate compromise between accuracy and efficiency. Simplifying assumptions have therefore been built into DTS tools, and they are closely related to thermal mass. The validity of such assumptions, in particular the lumping of interior convective and long-wave radiative exchanges into a single coefficient, or the fixed distribution of the solar gains transmitted through windows, particularly needs to be reassessed in the context of highly insulated buildings. Accordingly, a model decoupling convective and long-wave radiative exchanges, as well as a model tracking the sun patch (detailed models), were implemented in a simulation platform based on modal analysis and a finite-volume discretisation. A first comparison between the detailed and simplified models was carried out on "BESTEST" case studies, also including results from internationally recognised reference DTS tools (EnergyPlus, ESP-r, TRNSYS). Similar work was carried out on an instrumented passive house (the INCAS platform in Chambéry) using uncertainty and sensitivity analysis techniques. The results show a downward trend in the heating and cooling loads estimated by the detailed models considered here. Furthermore, these detailed models do not appear to contribute to significantly reducing the discrepancies between simulations and measurements.
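As a back-of-the-envelope illustration of how thermal mass damps indoor temperature swings, the sketch below integrates a lumped 1R1C (one resistance, one capacitance) building model under a sinusoidal outdoor temperature and solar gain; the resistance, capacitance, and gain values are arbitrary and do not correspond to the detailed models implemented in the thesis.

```python
# Toy 1R1C building model: a larger thermal capacitance damps the indoor
# temperature swing. All parameter values are illustrative only.
import math


def indoor_swing(capacitance_j_per_k, resistance_k_per_w, hours=72, dt=60.0):
    t_in = 20.0                       # indoor temperature [degC]
    t_min, t_max = t_in, t_in
    for step in range(int(hours * 3600 / dt)):
        time_s = step * dt
        # Outdoor temperature: 15 degC mean with a +/-10 degC daily cycle.
        t_out = 15.0 + 10.0 * math.sin(2 * math.pi * time_s / 86400.0)
        # Crude daytime solar gain through glazing [W].
        solar_gain = max(0.0, 800.0 * math.sin(2 * math.pi * time_s / 86400.0))
        # Explicit Euler step of C * dT/dt = (T_out - T_in) / R + gains
        t_in += ((t_out - t_in) / resistance_k_per_w + solar_gain) * dt / capacitance_j_per_k
        t_min, t_max = min(t_min, t_in), max(t_max, t_in)
    return t_max - t_min


print("lightweight structure:", round(indoor_swing(5e6, 0.01), 1), "K swing")
print("heavyweight structure:", round(indoor_swing(5e7, 0.01), 1), "K swing")
```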
|