171

Způsoby ověření kvality aplikací a systémů (metodika, nástroje) / Common ways of controlling the quality of software applications and systems (methodology & tools)

Borůvka, Zdeněk January 2008 (has links)
An integral part of every systematically managed software development or maintenance project is an emphasis on the continuous quality of all project activities. Because the final quality of project deliverables (new or enhanced applications, preconfigured solutions such as SAP) strongly influences project success, and therefore also the long-term relationship between the customer and the contractor(s), this document focuses on ways to proactively prevent mistakes throughout the software development lifecycle, and on techniques that help establish better quality control of important deliverables through a systematic approach, high-quality tools, and suitable metrics in the software testing discipline. The document successively surveys the typical project areas where the quality of project members' outputs must be kept under control, places testing in the context of typical project constraints, offers practical recommendations on testing methodology, tools, and widely tested technologies, and explains trends and risks in the testing domain. The goal is not only to document the wide range of possibilities offered by frequently used testing techniques and tools, but also to offer practical guidance for deploying a test discipline. The document was written by comparing the author's professional experience in software quality management with the knowledge gathered from the information sources listed in the bibliography, and it presents the concrete conclusions of that comparison.
172

A software framework to support distributed command and control applications

Duvenhage, Arno 09 August 2011 (has links)
This dissertation discusses a software application development framework. The framework supports developing software applications within the context of Joint Command and Control, which includes interoperability with network-centric systems as well as with existing legacy systems. The next generation of Command and Control systems is expected to be built on common architectures or enterprise middleware. Enterprise middleware does not, however, directly address integration with legacy Command and Control systems, nor does it address integration with existing and future tactical systems such as fighter aircraft. The software framework discussed in this dissertation enables existing legacy systems and tactical systems to interoperate with each other; it enables interoperability with the Command and Control enterprise; and it enables simulated systems to be deployed within a real environment. The framework does all of this through a unique distributed architecture. The architecture supports both system interoperability and the simulation of systems and equipment within the context of Command and Control; this hybrid approach is the key to the framework's success. There is a strong focus on the quality of the framework, and the current implementation has already been applied successfully within the Command and Control environment. The current framework implementation is also supplied on a DVD with this dissertation. / Dissertation (MEng)--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
173

Avaliação de qualidade em aplicativos educacionais móveis / Quality evaluation of mobile learning applications

Gustavo Willians Soad 21 June 2017 (has links)
Estudos indicam que a utilização de aplicativos educacionais móveis vem crescendo continuamente, possibilitando a alunos e professores maior flexibilidade e comodidade na execução de atividades e práticas educacionais. Embora várias instituições já tenham aderido à modalidade de aprendizagem móvel (m-learning), sua adoção ainda traz problemas e desafios organizacionais, culturais e tecnológicos. Um desses problemas consiste em como avaliar adequadamente a qualidade dos aplicativos educacionais desenvolvidos. De fato, os métodos existentes para avaliação da qualidade de software ainda são muito genéricos, não contemplando aspectos específicos dos contextos pedagógico e móvel. Nesse cenário, o presente trabalho apresenta o método MoLEva, desenvolvido para avaliar a qualidade de aplicativos educacionais móveis. O método tem como base a norma ISO/IEC 25000, sendo composto por: (i) modelo de qualidade; (ii) métricas; e (iii) critérios de julgamento. Para validar o método, foram realizados dois estudos de caso; o primeiro consistiu na aplicação do MoLEva para avaliar o aplicativo do ENEM; o segundo, na aplicação do método para avaliação de aplicativos de ensino de idiomas. A partir dos resultados obtidos, foi possível identificar problemas e pontos de melhoria nos aplicativos avaliados. Além disso, os estudos de caso conduzidos forneceram bons indicativos a respeito da viabilidade de uso do método MoLEva na avaliação de aplicativos educacionais móveis. / Studies indicate that the use of mobile learning applications has grown continuously, giving students and teachers greater flexibility and convenience in carrying out educational activities and practices. Although several institutions have already adopted the mobile learning (m-learning) modality, its adoption still brings organizational, cultural, and technological problems and challenges. One of these problems is how to adequately evaluate the quality of the mobile learning applications that are developed. In fact, existing methods for evaluating software quality are still very generic and do not consider aspects specific to the pedagogical and mobile contexts. In this scenario, the present work introduces the MoLEva method, developed to evaluate the quality of mobile learning applications. The method is based on the ISO/IEC 25000 standard and is composed of: (i) a quality model; (ii) metrics; and (iii) judgment criteria. To validate the method, two case studies were performed: the first applied MoLEva to evaluate the ENEM application; the second applied the method to evaluate language-learning applications. From the results obtained, it was possible to identify problems and points for improvement in the evaluated applications. In addition, the case studies provided good indications of the feasibility of using the MoLEva method to evaluate mobile learning applications.
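The abstract does not disclose MoLEva's internal details, but an ISO/IEC 25000-style evaluation of the kind it describes, a quality model plus metrics plus judgment criteria, can be sketched as a weighted aggregation of normalized metric values judged against rating thresholds. The characteristic names, weights, and thresholds below are illustrative assumptions, not MoLEva's actual model.

```python
# Illustrative ISO/IEC 25000-style scoring: weighted aggregation of
# normalized per-characteristic scores, then a judgment against thresholds.
# The characteristics, weights, and cut-offs here are assumed for the sketch.

QUALITY_MODEL = {
    "functional_suitability": 0.30,
    "usability": 0.30,
    "performance_efficiency": 0.20,
    "reliability": 0.20,
}

def aggregate_score(measurements):
    """Weighted mean of per-characteristic scores, each normalized to [0, 1]."""
    return sum(QUALITY_MODEL[name] * value for name, value in measurements.items())

def judge(score):
    """Map an aggregate score onto a discrete judgment scale."""
    if score >= 0.8:
        return "satisfactory"
    if score >= 0.6:
        return "minimally acceptable"
    return "unsatisfactory"

app = {"functional_suitability": 0.9, "usability": 0.7,
       "performance_efficiency": 0.8, "reliability": 0.6}
score = aggregate_score(app)  # 0.3*0.9 + 0.3*0.7 + 0.2*0.8 + 0.2*0.6 = 0.76
print(round(score, 2), judge(score))  # 0.76 minimally acceptable
```

In a real ISO/IEC 25000 evaluation the raw metric values would first be normalized by measurement functions; the sketch assumes that step has already happened.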
174

Challenges of Large-Scale Software Testing and the Role of Quality Characteristics : Empirical Study

Belay, Eyuel January 2020 (has links)
Currently, information technology influences every walk of life. Our lives increasingly depend on software and its functionality, so the development of high-quality software products is indispensable, and in recent years there has been growing demand for such products. Delivering high-quality software products and services is not possible at no cost. Furthermore, software systems have become complex and challenging to develop, test, and maintain because of their scale. With increasing complexity in large-scale software development, testing has therefore become a crucial issue affecting the quality of software products. In this paper, the challenges of large-scale software testing with respect to quality, and their respective mitigations, are reviewed using a systematic literature review and interviews. Existing literature on large-scale software development deals with issues such as requirements and security challenges, so large-scale software testing and its mitigations have not been treated in depth.
In this study, a total of 2710 articles published between 1995 and 2020 were collected: 1137 (42%) from IEEE, 733 (27%) from Scopus, and 840 (31%) from Web of Science. Sixty-four relevant articles were selected through the systematic literature review; to include missed but relevant articles, snowballing was applied, adding 32 more. From the resulting 96 articles, a total of 81 challenges of large-scale software testing were identified: 32 (40%) concerning performance, 10 (12%) security, 10 (12%) maintainability, 7 (9%) reliability, 6 (8%) compatibility, 10 (12%) general, 3 (4%) functional suitability, 2 (2%) usability, and 1 (1%) portability. The author thus identified many challenges related to the performance, security, reliability, maintainability, and compatibility quality attributes, but few related to functional suitability, portability, and usability. The results of the study can be used as a guideline in large-scale software testing projects to pinpoint potential challenges and act accordingly.
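The distribution reported above is easy to recompute from the raw counts. A quick sketch (counts taken from the abstract; rounding of individual shares may differ by a point from the reported figures) verifies that the categories sum to 81 challenges and converts each count to its share:

```python
# Challenge counts per quality characteristic, as reported in the abstract.
challenges = {
    "performance": 32, "security": 10, "maintainability": 10,
    "general": 10, "reliability": 7, "compatibility": 6,
    "functional suitability": 3, "usability": 2, "portability": 1,
}

total = sum(challenges.values())
print(total)  # 81

# Largest categories first, each with its percentage share.
for name, count in sorted(challenges.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {count} ({count / total:.0%})")
```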
175

Investigating the applicability of Software Metrics and Technical Debt on X++ Abstract Syntax Tree in XML format : calculations using XQuery expressions

Tran, David January 2019 (has links)
This thesis investigates how XML representations of X++ abstract syntax trees (ASTs) residing in an XML database can be subjected to static code analysis. Microsoft Dynamics 365 for Finance & Operations comprises a large and complex corpus of X++ source code, and intuitive ways of visualizing and analysing the state of the code base in terms of software metrics and technical debt are non-existent. A solution is to extend an internal web application and semantic search tool called SocrateX to calculate software metrics and technical debt. This is done by creating a web service that constructs XQuery and XPath code to be executed against the XML database. The values are stored in a relational database and imported into Power BI for intuitive visualization. The software metrics were chosen based on the amount of previous research and their compatibility with the X++ AST, whereas technical debt was estimated using the SQALE method. The thesis concludes that XML representations of X++ abstract syntax trees are viable candidates for measuring the quality of source code with functional query languages.
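The thesis computes metrics with XQuery/XPath inside an XML database; the same idea can be shown in miniature with Python's standard library, querying a toy AST serialized as XML. The element names below are assumptions for illustration, not the actual X++ AST schema, and `ElementTree` supports only a subset of XPath.

```python
import xml.etree.ElementTree as ET

# A toy method AST in XML form; a real X++ AST uses a different schema.
AST = """
<Method name="checkBalance">
  <IfStatement>
    <WhileStatement/>
  </IfStatement>
  <IfStatement/>
  <ReturnStatement/>
</Method>
"""

root = ET.fromstring(AST)

# McCabe-style cyclomatic complexity, simplified: 1 + number of branch nodes,
# found with XPath-style queries over the XML tree.
branch_nodes = root.findall(".//IfStatement") + root.findall(".//WhileStatement")
complexity = 1 + len(branch_nodes)
print(root.get("name"), complexity)  # checkBalance 4
```

An XML database such as the one in the thesis would run an equivalent XQuery expression server-side over thousands of serialized methods instead of parsing one document in memory.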
176

Generation of Software Test Data from the Design Specification Using Heuristic Techniques. Exploring the UML State Machine Diagrams and GA Based Heuristic Techniques in the Automated Generation of Software Test Data and Test Code.

Doungsa-ard, Chartchai January 2011 (has links)
Software testing is a tedious and very expensive undertaking. Automatic test data generation is therefore proposed in this research to help testers reduce their workload and to help ascertain software quality. The concept of test-driven development (TDD) has become increasingly popular during the past several years; according to TDD, test data should be prepared before code implementation begins. This research therefore asserts that test data should be generated from the software design documents that are normally created prior to implementation. Among such design documents, UML state machine diagrams are selected as the platform for the proposed automated test data generation mechanism, because they show the behaviour of a single object in the system. A genetic algorithm (GA) based approach has been developed and applied in the search for the right amount of quality test data. Finally, the generated test data are used together with UML class diagrams for JUnit test code generation. The GA-based test data generation methods have been enhanced to handle the parallel-path and loop problems of UML state machines; in addition, the proposed approach targets diagrams with parameterised triggers. As a result, the proposed framework generates test data from the basic state machine diagram and the basic class diagram without any additional non-standard information, whereas most other approaches require additional information or generate test data from other formal languages. The transition coverage achieved by the introduced approach is also high; the generated test data can therefore cover most of the behaviour of the system. / EU Asia-Link project TH/Asia Link/004(91712) East-West and CAMT
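GA-based test data generation of the kind described here can be pictured with a deliberately small sketch: individuals are event sequences, and fitness is the number of distinct state-machine transitions a sequence exercises. The state machine, the GA operators, and all parameters below are illustrative assumptions, not those of the thesis.

```python
import random

# Toy state machine: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}
EVENTS = ["start", "pause", "stop"]

def fitness(sequence):
    """Number of distinct transitions covered when replaying an event sequence."""
    state, covered = "idle", set()
    for event in sequence:
        if (state, event) in TRANSITIONS:
            covered.add((state, event))
            state = TRANSITIONS[(state, event)]
    return len(covered)

def evolve(generations=40, pop_size=20, length=8, seed=1):
    """Truncation selection plus point mutation, maximizing transition coverage."""
    rng = random.Random(seed)
    pop = [[rng.choice(EVENTS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half (elitism)
        children = []
        for parent in parents:
            child = parent.copy()
            child[rng.randrange(length)] = rng.choice(EVENTS)  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TRANSITIONS), "transitions covered")
```

The thesis's enhancements (parallel paths, loops, parameterised triggers) would enter through a richer individual encoding and fitness function; the selection/mutation loop keeps the same shape.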
177

Contributions to the usability of Sorald for repairing static analysis violations / En studie om Soralds användarvänlighet för reparation av regelbrott i statisk kodanalys

Luong Phu, Henry January 2021 (has links)
Automated static analysis tools are important in modern software quality assurance. These tools scan the input source or binary code against a set of rules to detect functional or maintainability problems and then warn developers about the rule violations found. Developers then analyze and possibly repair the rule violations in a manual procedure, which can be time-consuming. Since human effort is costly, automated solutions for repairing rule violations play an important role in software development. In previous work, a tool named Sorald was developed to automatically repair rule violations reported by the static analyzer SonarJava. However, Sorald lacks reliability in generating patches, and its use by developers lacks automation. This work therefore proposes solutions to improve the usability of Sorald. First, a new source-code analysis and repair strategy was introduced, which allows Sorald to deliver a fix even when an internal failure occurs. Second, Sorald was integrated into a repair bot named Repairnator, which was in turn integrated into the Jenkins continuous integration service. This allows Sorald to be executed automatically in continuous integration builds and its generated patches to be proposed automatically to developers on GitHub. To evaluate the proposed solutions, Sorald was executed and monitored on 28 open-source projects hosted on GitHub. The results show that the new repair strategy improves Sorald's performance in terms of the number of fixes, while the repair time remains mostly unchanged compared with the default repair strategy. Moreover, the total repair time of Sorald for the 15 supported SonarJava rules is within the continuous integration time of the analyzed projects, which means that it is feasible to repair projects with Sorald in such an environment.
Finally, most Sorald patches are compilable and are usually accepted without negative comments by developers, whenever there is a reaction to the proposed GitHub pull requests. In conclusion, the contributions of this work improve the overall usability of Sorald as an automated software repair tool. / Automatiserade statiska analysverktyg är viktiga för modern kvalitetssäkring inom mjukvaruutveckling. Dessa verktyg skannar källkoden eller binärkoden mot en uppsättning regler för att upptäcka funktions- eller underhållsproblem och varnar sedan utvecklare om de regelbrott som hittats. Utvecklarna granskar sedan dessa regelbrott och reparerar dem eventuellt i en manuell procedur, vilket kan vara tidskrävande. Eftersom mänskligt arbete är kostsamt spelar automatiserade lösningar för att reparera regelbrott en viktig roll i programvaruutveckling. I ett tidigare arbete utvecklades ett verktyg vid namn Sorald för att automatiskt reparera regelbrott som rapporterats av den statiska analysatorn SonarJava. Det finns dock brister i Soralds tillförlitlighet när det gäller att generera korrigeringar, och användningen av Sorald saknar automatisering. Därför föreslås i detta arbete lösningar för att förbättra Soralds användbarhet. Först introducerades en ny strategi för källkodsanalys och reparation i Sorald, som gör det möjligt för Sorald att leverera en fix även när ett internt fel inträffar. För det andra integrerades Sorald i en reparationsbot vid namn Repairnator, som i sin tur integrerades i Jenkins tjänst för kontinuerlig integration. Detta gör att Sorald kan köras automatiskt i byggen för kontinuerlig integration och att dess genererade korrigeringar automatiskt föreslås för utvecklare på GitHub. Som en utvärdering av de föreslagna lösningarna kördes och övervakades Sorald på 28 projekt med öppen källkod på GitHub.
Resultaten visar att den nya reparationsstrategin förbättrar Soralds prestanda vad gäller antalet korrigeringar, medan reparationstiden förblir i stort sett oförändrad jämfört med standardstrategin. Dessutom ligger den totala reparationstiden för Sorald för de 15 SonarJava-regler som stöds inom den kontinuerliga integrationstiden för de analyserade projekten, vilket innebär att det är möjligt att reparera projekt med Sorald i en sådan miljö. Slutligen är de flesta Sorald-korrigeringar kompilerbara och accepteras vanligtvis utan negativa kommentarer från utvecklare, när det väl finns en reaktion på de föreslagna pull-förfrågningarna på GitHub. Sammanfattningsvis förbättrar bidragen från detta arbete Soralds övergripande användbarhet som ett automatiskt verktyg för reparation av programvara.
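Sorald itself transforms Java sources through program analysis (it uses AST transformations, not text matching), but the shape of a rule-based repair, match a violation pattern and rewrite it to the compliant form, can be shown with a toy example. The rule below (reference comparison of strings with `==` rewritten to `.equals`) is a classic static-analysis finding used purely for illustration; it is not claimed to be one of Sorald's actual transformations.

```python
import re

# Toy "repair": rewrite reference comparison against a string literal
# to a null-safe .equals() call. Real repair tools transform syntax
# trees; a regex is only adequate for this illustration.
VIOLATION = re.compile(r'(\w+)\s*==\s*("(?:[^"\\]|\\.)*")')

def repair(java_source: str) -> str:
    """Replace `var == "literal"` with `"literal".equals(var)`."""
    return VIOLATION.sub(r'\2.equals(\1)', java_source)

before = 'if (status == "DONE") { finish(); }'
after = repair(before)
print(after)  # if ("DONE".equals(status)) { finish(); }
```

Putting the literal first is the conventional fix because it also removes a potential `NullPointerException` when the variable is null.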
178

INVESTIGATING COMMON PERCEPTIONS OF SOFTWARE ENGINEERING METHODS APPLIED TO SCIENTIFIC COMPUTING SOFTWARE

Srinivasan, Malavika January 2018 (has links)
Scientific Computing (SC) software has significant societal impact due to its application in safety-related domains such as nuclear, aerospace, military, and medicine. Unfortunately, recent research has shown that SC software does not always achieve the desired software qualities, like maintainability, reusability, and reproducibility. Software Engineering (SE) practices have been shown to improve software qualities, but SC developers, who are often the scientists themselves, frequently fail to adopt SE practices because of the time commitment. To promote the application of SE in SC, we conducted a case study in which we developed new SC software. The software we developed will be used to predict the nature of solidification in a casting process, to facilitate the reduction of expensive defects in parts. During the development process, we adopted SE practices and involved the scientists from the beginning. We interviewed the scientists before and after software development to assess their attitudes towards SE for SC. The interviews revealed a positive response: in the post-development interview, the scientists' attitudes towards SE for SC had changed, and they were willing to adopt all the SE approaches that we followed. However, when it came to producing software artifacts, they felt overburdened and wanted more tools to reduce the time commitment and the complexity.
Contrasting our experience with currently held perceptions of scientific software development, we made the following observations. (a) Observations that agree with the existing literature: i) working on something the scientists are interested in is not enough to promote SE practices; ii) maintainability is a secondary consideration for scientific partners; iii) scientists are hesitant to learn SE practices; iv) verification and validation are challenging in SC; v) scientists naturally follow agile methodologies; vi) common ground for communication has always been a problem; vii) an interdisciplinary team is essential; viii) scientists tend to choose a programming language based on familiarity; ix) scientists prefer plots to visualize, verify, and understand their science; x) early identification of test cases is advantageous; xi) scientists have a positive attitude toward issue trackers; xii) SC software should be designed for change; xiii) faking a rational design process for documentation is advisable for SC; xiv) scientists prefer informal, collegial knowledge transfer to reading documentation. (b) Observations that disagree with the existing literature: i) when unexpected results were obtained, our scientists chose to change the numerical algorithms rather than question their scientific theories; ii) documentation of up-front requirements is feasible for SC. We present the requirement specification and design documentation for our software as evidence that, with proper abstraction and application of a "faked rational design process", it is possible to document up-front requirements and improve quality. / Thesis / Master of Science (MSc)
179

A comparative study of three ICT network programs using usability testing

Van der Linde, P.L. January 2013 (has links)
Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2013 / This study compared the usability of three Information and Communication Technology (ICT) network programs in a learning environment. The researcher wanted to establish which program was the most adequate from a usability perspective for second-year Information Technology (IT) students at the Central University of Technology (CUT), Free State. The Software Usability Measurement Inventory (SUMI) is a testing technique that measures software quality from the user's perspective. The technique is supported by an extensive reference database for measuring a software product's quality in use, and is embedded in an effective analysis and reporting tool called SUMI scorer (SUMISCO). SUMI was applied in a controlled laboratory environment where second-year IT students at the CUT used it, as part of their networking subject System Software 1 (SPG1), to evaluate each of the three ICT network programs. The results, strengths and weaknesses, as well as usability improvements identified by SUMISCO, are discussed to determine the best ICT network program from a usability perspective according to the SPG1 students.
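SUMI's actual scoring is standardized against a proprietary reference database, so it cannot be reproduced here; but the general shape of questionnaire-based usability measurement, per-scale aggregation of Likert items, can be sketched. The scale names below match SUMI's published subscales; the item-to-scale mapping and the scoring itself are illustrative assumptions, not the copyrighted instrument.

```python
from statistics import mean

# SUMI's published subscales; the item assignments and the 1-3 item
# scoring below are illustrative, not SUMI's actual standardized scoring.
SCALES = {
    "Efficiency":   [0, 1],
    "Affect":       [2, 3],
    "Helpfulness":  [4, 5],
    "Control":      [6, 7],
    "Learnability": [8, 9],
}

def scale_scores(responses):
    """responses: list of item scores (e.g. 1=disagree .. 3=agree)."""
    return {name: mean(responses[i] for i in items)
            for name, items in SCALES.items()}

answers = [3, 2, 3, 3, 1, 2, 2, 2, 3, 1]  # one respondent's raw answers
for scale, score in scale_scores(answers).items():
    print(f"{scale}: {score:.1f}")
```

A tool like SUMISCO additionally converts such raw scale scores to standardized scores against its reference database, which is what makes cross-product comparison meaningful.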
180

Capturing, Eliciting, and Prioritizing (CEP) Non-Functional Requirements Metadata during the Early Stages of Agile Software Development

Maiti, Richard Rabin 01 January 2016 (has links)
Agile software engineering has been a popular methodology for developing software rapidly and efficiently. However, the agile methodology often favors Functional Requirements (FRs), owing to the nature of agile software development, and strongly neglects Non-Functional Requirements (NFRs). Neglecting NFRs has had negative impacts on software products, resulting in poor quality and a higher cost of fixing problems in later stages of development. This research developed the CEP ("Capture Elicit Prioritize") methodology to effectively gather NFR metadata from software requirement artifacts such as documents and images. The artifacts included the Optical Character Recognition (OCR) artifact, which gathered metadata from images, as well as the Database Artifact, NFR Locator Plus, the NFR Priority Artifact, and the Visualization Artifact. The gathered NFR metadata reduced false positives and allowed NFRs to be included, along with FRs, in the early stages of software requirements gathering. Furthermore, NFRs were prioritized using existing FR methodologies, which matters to stakeholders as well as to software engineers in delivering quality software. This research built on prior studies by focusing specifically on NFRs during the early stages of agile software development. The CEP methodology was validated using the 26 requirements of the European Union (EU) eProcurement System, with the NORMAP methodology as a baseline; the NERV methodology results were also used for comparison. The results show that the CEP methodology successfully identified NFRs in 56 of the 57 requirement sentences that contained NFRs, compared with 50 for the NORMAP baseline and 55 for NERV. CEP thus elicited 98.24% of the NFR-bearing sentences, compared with 87.71% for NORMAP, an improvement of 10.53% over the baseline.
The NERV result was 96.49%, so CEP represents an improvement of 1.75%. The CEP methodology successfully elicited 86 of 88 NFRs, compared with 75 for the baseline NORMAP methodology and 82 for NERV. The NFR-count elicitation success of CEP was 97.73%, compared with 85.24% for NORMAP, an improvement of 12.49%; compared with NERV's 93.18%, CEP is an improvement of 4.55%. The CEP methodology utilized the associated NFR Metadata (NFRM) figures/images and linked them to the related requirements to improve over the NORMAP and NERV methodologies: 29 baseline NFRs were found in the associated figures/images (NFRM), and 129 NFRs appeared both in the requirement sentence and in the associated figures/images (NFRM). Another goal of this study was to improve the prioritization of NFRs relative to prior studies. The research provides effective techniques for prioritizing NFRs during the early stages of agile software development and examines the impacts that NFRs have on the development process. The CEP methodology effectively prioritized NFRs by utilizing the αβγ-framework in a way similar to FRs. The sub-process of the αβγ-framework was modified in a way that is very attractive to agile team members, allowing parts of the framework to be replaced to suit a team's specific needs in prioritizing NFRs. The top five requirements based on NFR prioritization were 12.3, 24.5, 15.3, 7.5, and 7.1. The prioritization of NFRs fits the agile software development cycle and allows agile developers and team members to plan accordingly to accommodate time and budget constraints.
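The CEP artifacts themselves are not reproduced in the abstract, but the core idea of locating NFRs in requirement sentences, scanning for quality-attribute indicator terms, can be sketched as below. The indicator vocabulary and the matching rule are assumptions for illustration, not the actual NFR Locator Plus artifact.

```python
import re

# Keyword-based NFR detection, in the spirit of (but not identical to)
# the NFR Locator artifact: indicator terms per quality attribute.
INDICATORS = {
    "security":    {"encrypt", "authenticate", "authorization", "secure"},
    "performance": {"response", "throughput", "latency", "seconds"},
    "usability":   {"usable", "intuitive", "accessible", "learn"},
}

def detect_nfrs(sentence):
    """Return the quality attributes whose indicator terms appear in the sentence."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return sorted(attr for attr, terms in INDICATORS.items() if words & terms)

requirements = [
    "The system shall authenticate users before granting access.",
    "Search results shall be returned within 2 seconds.",
    "The supplier uploads the tender documents.",  # purely functional
]
for req in requirements:
    print(detect_nfrs(req), "-", req)
```

A real elicitation pipeline would additionally pull indicator terms from OCR'd figures and weight matches before prioritization; the lexical scan is only the first step.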
