21

Statistical causal analysis for fault localization

Baah, George Kofi 08 August 2012 (has links)
The ubiquitous nature of software demands that software be released without faults. However, software developers inadvertently introduce faults into software during development. To remove the faults in software, one of the tasks developers perform is debugging. However, debugging is a difficult, tedious, and time-consuming process. Several semi-automated techniques have been developed to reduce the burden on the developer during debugging. These techniques consist of experimental, statistical, and program-structure-based techniques. Most of the debugging techniques address the part of the debugging process that relates to finding the location of the fault, which is referred to as fault localization. The current fault-localization techniques have several limitations, including (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of a theoretical basis for addressing the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To that end, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model, known as the probabilistic program dependence graph (PPDG), is applied to the fault-localization problem. The insights gained from applying the PPDG to fault localization fuel the development of a novel theoretical framework for fault localization based on established causal-inference methodology. This framework enables current statistical fault-localization metrics to be analyzed from a causal perspective.
The analysis of the metrics shows that they are related to each other, thereby allowing their unification. The causal analysis also reveals that the current statistical techniques do not find the causes of program failures; instead, they find the program elements most associated with failures. However, fault localization is a causal problem, and statistical association does not imply causation. Several empirical studies are conducted on several software subjects, and the results (1) confirm our analytical results and (2) demonstrate the efficacy of our causal technique for fault localization. The results demonstrate that the research in this dissertation significantly improves on the state of the art in fault localization.
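The association-versus-causation distinction drawn above can be made concrete with a small sketch of a conventional spectrum-based metric. This is not the PPDG or the dissertation's causal technique; it is the well-known Ochiai suspiciousness formula, a representative of the statistical (association-based) metrics the dissertation analyzes, applied to invented coverage data:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Suspiciousness of a statement from its coverage spectrum.

    failed_cov: number of failing tests that execute the statement
    passed_cov: number of passing tests that execute the statement
    total_failed: total number of failing tests in the suite
    """
    if total_failed == 0 or (failed_cov + passed_cov) == 0:
        return 0.0
    return failed_cov / math.sqrt(total_failed * (failed_cov + passed_cov))

# Hypothetical coverage spectra: statement -> (failing runs covering it,
# passing runs covering it), with 4 failing tests overall.
spectra = {"s1": (4, 1), "s2": (1, 9), "s3": (4, 4)}
total_failed = 4

scores = {s: ochiai(f, p, total_failed) for s, (f, p) in spectra.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # → s1 (the statement most associated with failure)
```

As the abstract notes, a high Ochiai score only marks the element most *associated* with failure; it does not establish that the element *causes* the failure.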
22

Analysis of multiple software releases of AFATDS using design metrics

Bhargava, Manjari January 1991 (has links)
The development of high-quality software the first time greatly depends upon the ability to judge the potential quality of the software early in the life cycle. The Software Engineering Research Center design metrics research team at Ball State University has developed a metrics approach for analyzing software designs. Given a design, these metrics highlight stress points and determine overall design quality. The purpose of this study is to analyze multiple software releases of the Advanced Field Artillery Tactical Data System (AFATDS) using design metrics. The focus is on examining the transformations of design metrics at each of three releases of AFATDS to determine the relationship of design metrics to the complexity and quality of a maturing system. The software selected as a test case for this research is the Human Interface code from Concept Evaluation Phase releases 2, 3, and 4 of AFATDS. To automate the metric collection process, a metric tool called the Design Metric Analyzer was developed. Analysis of the design-metric data indicated that the standard deviation and mean for the metric were higher for release 2, relatively lower for release 3, and again higher for release 4. This indicates a decrease in complexity and an improvement in the quality of the software from release 2 to release 3, and an increase in complexity in release 4. Dialog with project personnel regarding design metrics confirmed most of these observations. / Department of Computer Science
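The per-release aggregation described above can be sketched as follows. This is not the Design Metric Analyzer, and the metric values are invented for illustration; it only shows the mean/standard-deviation comparison across releases that the study reports:

```python
from statistics import mean, stdev

# Hypothetical per-module design-metric values for three releases
releases = {
    2: [12, 18, 25, 30, 22],
    3: [10, 12, 14, 15, 11],
    4: [20, 28, 35, 24, 31],
}

# Aggregate each release into (mean, sample standard deviation)
summary = {r: (round(mean(v), 2), round(stdev(v), 2)) for r, v in releases.items()}
for r, (m, s) in sorted(summary.items()):
    print(f"release {r}: mean={m} stdev={s}")
```

With these invented numbers the pattern matches the study's finding: both statistics drop from release 2 to release 3 and rise again at release 4.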
23

Continuing professional education for software quality assurance / CPE for SQA

Hammons, Rebecca L. January 2009 (has links)
This case study examined the self-directed and team-based learning activities of a software quality assurance organization in central Indiana. The skills required to assure a high level of software quality evolve rapidly, and software quality professionals must embrace ongoing technology and process changes. The thirty focus group participants performed a variety of quality assurance tasks, including configuration management, research, automated test development, test planning and execution, and team leadership. The case study was based on semi-structured interviews of four focus groups of software quality professionals, and explored the learning styles, preferences, and activities deployed to learn new technologies and solve complex software problems. Software products are becoming increasingly pervasive in our culture. The study of continuing education for the software quality profession is important due to our increased reliance on this profession to meet customer expectations for high-quality software products. The proliferation of software products in our culture has also increased the demand for software quality professionals. Those professionals who have access to continuing professional education to improve and maintain skills have the opportunity to meet customer expectations. There is no mandated certification or licensing for this profession; professionals are therefore left to chart their own course of learning. This study sought to understand how these software quality professionals meet their continuing professional educational needs. The study also identified key resources required to support such continuing professional education, both within the workplace and off the job. Future study of the role of critical self-reflection in establishing learning objectives could enhance our understanding of how software quality professionals identify and plan their learning activities.
Further investigation of the value of computer programming and logic knowledge to the software quality professional would benefit our understanding of baseline skill requirements for the various roles performed in the profession. There are also opportunities to engage in future action research projects on co-location of teams, mentoring, and job rotation strategies, as employees were found to learn effectively from peers. / Department of Educational Studies
24

Monitoração de requisitos de qualidade baseada na arquitetura de software / Quality requirements monitoring based on software architecture

Silva, André Almeida 19 February 2015 (has links)
Computer systems gain more space in individuals' lives every day, so the demand for ever more sophisticated and accurate computerized solutions keeps growing. This creates a requirement for effective quality assurance of the software produced, verified through the monitoring of quality attributes. However, the main current monitoring techniques target mainly service-based systems, leaving aside a large share of software. In this context, this work discusses the monitoring of the quality attributes referenced by the ISO/IEC 9126 standard. Decision trees are defined that relate architectural elements to monitoring questions, along with a tool that uses Aspect-Oriented Programming concepts to automate the monitoring of reliability and efficiency requirements by generating aspect-monitors for logging and exception recording in a given target system. A case study, structured by the Goal/Question/Metric (GQM) paradigm, was conducted to analyze the feasibility of the developed solution, which offers architects and software developers a simplified way to define monitors that measure quality attributes in their systems. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
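The generated aspect-monitors described in this record use Aspect-Oriented Programming. As a rough, hypothetical stand-in (not the thesis tool, and in Python rather than an AspectJ-style language), a decorator can weave the same two cross-cutting concerns around a target function: exception recording (reliability) and latency logging (efficiency):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitor")

def quality_monitor(func):
    """Stand-in for a generated aspect-monitor: records exceptions and latency."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            # Reliability concern: log the failure, then re-raise unchanged
            log.exception("reliability: exception in %s", func.__name__)
            raise
        finally:
            # Efficiency concern: log how long the call took
            elapsed = time.perf_counter() - start
            log.info("efficiency: %s took %.6fs", func.__name__, elapsed)
    return wrapper

@quality_monitor
def divide(a, b):
    return a / b

print(divide(10, 2))  # → 5.0, with latency logged as a side effect
```

The monitored function's behavior is unchanged; the monitoring logic stays separate from the business logic, which is the point of the aspect-based approach.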
25

The implications of deviating from software testing processes : a case study of a software development company in Cape Town, South Africa

Roems, Raphael January 2017 (has links)
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2017. / Ensuring that predetermined quality standards are met is a challenge that software development companies, and the software development industry at large, struggle to attain. The software testing process is an important process within the larger software development process, and is done to ensure that software functionality meets user requirements and that software defects are detected and fixed before users receive the developed software. Software testing processes have progressed to the point where there are formal processes, dedicated software testing resources, and defect management software in use at software development organisations. The research determined the implications that the case study software development organisation could face when deviating from software testing processes, with a focus on the functions performed by the software tester role. The analytical dimensions of the duality of structure framework, based on Structuration Theory, were used as a lens to understand and interpret the socio-technical processes associated with software development processes at the case study organisation. Results include the identification of software testing processes, resources and tools, together with the formal software development processes and methodologies being used. Critical e-commerce website functionality and software development resource costs were identified, as were the tangible and intangible costs which arise due to software defects. Recommendations include the prioritisation of critical functionality for test execution on the organisation's e-commerce website platform. The necessary risk management should also be undertaken in scenarios with time constraints on software testing, balancing risk with quality, features, budget and schedule.
Numerous process improvements were recommended for the organisation, to assist in preventing deviations from prescribed testing processes. A guideline was developed as a research contribution to illustrate the relationships of the specific research areas and the impact on software project delivery.
26

A comparative study of three ICT network programs using usability testing

Van der Linde, P.L. January 2013 (has links)
Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2013 / This study compared the usability of three Information and Communication Technology (ICT) network programs in a learning environment. The researcher wanted to establish which program was most adequate from a usability perspective among second-year Information Technology (IT) students at the Central University of Technology (CUT), Free State. The Software Usability Measurement Inventory (SUMI) testing technique can measure software quality from a user perspective. The technique is supported by an extensive reference database to measure a software product's quality in use, and is embedded in an effective analysis and reporting tool called SUMI scorer (SUMISCO). SUMI was applied in a controlled laboratory environment where second-year IT students of the CUT utilized SUMI as part of their networking subject, System Software 1 (SPG1), to evaluate each of the three ICT network programs. The results, strengths and weaknesses, as well as usability improvements, as identified by SUMISCO, are discussed to determine the best ICT network program from a usability perspective according to SPG1 students.
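SUMI's real scoring relies on a standardized questionnaire and a proprietary reference database, neither of which is reproduced here. Purely as a hypothetical illustration of the comparison task this study performs, one might aggregate invented 5-point questionnaire ratings per program and rank the programs:

```python
from statistics import mean

# Invented 5-point responses (1 = strongly disagree .. 5 = strongly agree)
# per program; real SUMI scoring is standardized against a reference
# database and is NOT reproduced by this sketch.
responses = {
    "program_a": [4, 5, 4, 3, 5],
    "program_b": [3, 3, 2, 4, 3],
    "program_c": [2, 2, 3, 2, 1],
}

# Average each program's ratings and pick the highest-scoring one
scores = {p: round(mean(r), 2) for p, r in responses.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # program with the highest average rating
```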
27

Compliance Issues In Cloud Computing Systems

Unknown Date (has links)
Appealing features of cloud services such as elasticity, scalability, universal access, low entry cost, and flexible billing motivate consumers to migrate their core businesses into the cloud. However, there are challenges regarding security, privacy, and compliance. Building compliant systems is difficult because of the complex nature of regulations and cloud systems. In addition, the lack of complete, precise, vendor-neutral, and platform-independent software architectures makes compliance even harder. We have attempted to make regulations clearer and more precise with patterns and reference architectures (RAs). We have analyzed regulation policies, identified overlaps, and abstracted them as patterns to build compliant RAs. RAs should be complete, precise, abstract, vendor neutral, platform independent, and free of implementation details; however, their levels of detail and abstraction are still debatable, and there is no commonly accepted definition of what an RA should contain. Existing approaches to build RAs lack structured templates and systematic procedures. In addition, most approaches do not take full advantage of patterns and best practices that promote architectural quality. We have developed a five-step approach by analyzing features from available approaches and refining and combining them in a new way. We consider an RA as a big compound pattern that can improve the quality of the concrete architectures derived from it and from which we can derive more specialized RAs for cloud systems. We have built an RA for HIPAA, a compliance RA (CRA), and a specialized compliance and security RA (CSRA) for cloud systems. These RAs take advantage of patterns and best practices that promote software quality. We evaluated the architecture by creating profiles. The proposed approach can be used to build RAs from scratch or to build new RAs by abstracting real RAs for a given context.
We have also described an RA itself as a compound pattern by using a modified POSA template. Finally, we have built a concrete deployment and availability architecture derived from CSRA that can be used as a foundation to build compliance systems in the cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
28

A case study of quality management of software applications for management information systems in Hong Kong.

January 1994 (has links)
by Ng Mei Po Mabel. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 51-52). / ABSTRACT --- p.ii / TABLE OF CONTENTS --- p.iii / Chapter / Chapter I. --- INTRODUCTION --- p.1 / Chapter II. --- PROBLEM IN FOCUS --- p.3 / Chapter III. --- SCOPE OF STUDY --- p.7 / Chapter IV. --- RESEARCH METHODOLOGY --- p.8 / Chapter V. --- ORGANISATION OF INFORMATION TECHNOLOGY SERVICES DEPARTMENT --- p.8 / Function --- p.8 / Mission --- p.8 / Organisation Structure --- p.8 / Personnel Schedule --- p.8 / Requests for Computerisation --- p.10 / Departmental IS Strategic Planning --- p.10 / Microcomputer Systems and Items --- p.10 / Mainframe Systems and Mid Range Systems --- p.10 / Chapter VI. --- SYSTEMS DEVELOPMENT LIFE CYCLE --- p.12 / Introduction --- p.12 / Detailed Description --- p.15 / What is SSADM+ in ITSD ? --- p.22 / Implementation of SSADM+ in ITSD --- p.26 / Chapter VII. --- THE ROAD TO ACHIEVE ISO9001 --- p.28 / The Principal Concepts and Significance of ISO9000 --- p.28 / Why is ISO9000 Recommended to be Necessary for ITSD? --- p.29 / Overview of the Feasibility of Applying ISO9000 in ITSD --- p.30 / Recommendations --- p.35 / Problems of Study --- p.38 / APPENDIX --- p.39 / BIBLIOGRAPHY --- p.51
29

Software architecture evaluation for framework-based systems.

Zhu, Liming, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 
4) Trade-off and sensitivity analysis: A new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify (1) the trade-offs implied by an architecture alternative, along with the magnitude of these trade-offs; (2) the most critical decisions in the overall decision process; and (3) the sensitivity of the final decision and its capability for handling quality-attribute priority changes.
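The AHP step underlying technique 4 can be sketched generically. The pairwise-comparison matrix below is invented, and the geometric-mean row aggregation is a common textbook approximation of AHP's principal-eigenvector weights; it is not the refined technique from this thesis:

```python
import math

# Hypothetical pairwise comparisons of three framework alternatives.
# matrix[i][j] is how strongly alternative i is preferred over j
# on the Saaty 1-9 scale; matrix[j][i] is the reciprocal.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(m):
    """Priority weights via the geometric-mean approximation of AHP."""
    gms = [math.prod(row) ** (1 / len(row)) for row in m]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(matrix)
print([round(w, 3) for w in weights])  # weights sum to 1; alternative 0 dominates
```

Sensitivity analysis then amounts to perturbing entries of the matrix (i.e., quality-attribute priorities) and observing whether the ranking of the weights changes.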
30

Proposta de um método de teste para processos de desenvolvimento de software usando o paradigma orientado a notificações

Kossoski, Clayton 19 August 2015 (has links)
CAPES / The Notification Oriented Paradigm (NOP) is an alternative to the development of software applications and proposes to solve certain problems in the usual programming paradigms, namely the Declarative Paradigm (DP) and the Imperative Paradigm (IP).
Indeed, NOP unifies the main advantages of DP and IP while solving (in terms of model) several of their deficiencies and inconveniences related to logical-causal calculation, in both monoprocessor and multiprocessor environments. NOP has been materialized in terms of programming and modeling, but it still lacked a formalized method to guide developers in software testing. This dissertation proposes a test method for software projects that use NOP in their development. The proposed method was developed for the unit-testing and integration-testing phases. Unit testing considers the smallest testable entities of NOP and requires specific techniques for generating test cases. Integration testing considers the operation of NOP entities together to carry out use cases and can be accomplished in two steps: (1) tests of the features described in the requirements and the use case, and (2) tests that directly exercise the NOP entities that make up the use case (such as Premisses, Conditions, and Rules). The test method was applied in a case study involving the modeling and development of a simple air-combat software, and the results of this research show that the proposed method is of great value for testing NOP programs at both the unit and integration levels.
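The notification flow among the NOP entities named above can be illustrated with a minimal, hypothetical Python sketch. Class and attribute names are assumptions loosely following the Premisses/Conditions/Rules terminology; real NOP materializations are considerably richer:

```python
class Premise:
    """Holds a boolean fact; notifies its conditions only when the value changes."""
    def __init__(self, value=False):
        self.value = value
        self.conditions = []

    def set(self, value):
        if value != self.value:  # notify only on change (the NOP idea)
            self.value = value
            for cond in self.conditions:
                cond.notify()

class Condition:
    """Conjunction of premises; notifies its rule when it becomes true."""
    def __init__(self, premises, rule):
        self.premises = premises
        self.rule = rule
        for p in premises:
            p.conditions.append(self)

    def notify(self):
        if all(p.value for p in self.premises):
            self.rule.fire()

class Rule:
    """Executes its action each time its condition notifies it."""
    def __init__(self, action):
        self.action = action
        self.fired = 0

    def fire(self):
        self.fired += 1
        self.action()

# Hypothetical air-combat rule: launch when target locked AND missile ready
events = []
rule = Rule(lambda: events.append("launch"))
locked, ready = Premise(), Premise()
Condition([locked, ready], rule)

locked.set(True)  # condition not yet satisfied: nothing fires
ready.set(True)   # both premises hold: the rule fires via notification
print(events)     # → ['launch']
```

A unit test in the method described above would target one entity in isolation (e.g., that a `Premise` notifies only on a value change), while an integration test would exercise the whole Premise-Condition-Rule chain of a use case, as the final `set` calls do here.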
