171

Investigating the applicability of Software Metrics and Technical Debt on X++ Abstract Syntax Tree in XML format : calculations using XQuery expressions

Tran, David January 2019 (has links)
This thesis investigates how XML representations of X++ abstract syntax trees (ASTs) residing in an XML database can be subjected to static code analysis. Microsoft Dynamics 365 for Finance & Operations comprises a large and complex corpus of X++ source code, yet intuitive ways of visualizing and analysing the state of the code base in terms of software metrics and technical debt are non-existent. A solution is to extend SocrateX, an internal web application and semantic search tool, to calculate software metrics and technical debt. This is done by creating a web service that constructs XQuery and XPath code to be run against the XML database. The values are stored in a relational database and imported into Power BI for intuitive visualization. Software metrics have been chosen based on the amount of previous research and their compatibility with the X++ AST, whereas technical debt has been estimated using the SQALE method. This thesis concludes that XML representations of X++ abstract syntax trees are viable candidates for measuring source code quality with functional query languages.
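To make the querying approach concrete, here is a minimal sketch of computing one metric over an AST serialized as XML, using the JDK's built-in XPath engine. The element names (IfStatement, WhileStatement, SwitchStatement) and the file name are assumptions for illustration, not the actual SocrateX or X++ AST schema.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class AstMetricSketch {
    public static void main(String[] args) throws Exception {
        // Parse an AST that has been serialized to XML (file name is hypothetical).
        Document ast = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("method-ast.xml");

        XPath xpath = XPathFactory.newInstance().newXPath();

        // Count branching nodes; element names are assumed, not the real X++ AST schema.
        double branches = (Double) xpath.evaluate(
                "count(//IfStatement | //WhileStatement | //SwitchStatement)",
                ast, XPathConstants.NUMBER);

        // McCabe-style approximation: one linear path plus one per decision point.
        System.out.println("Approximate cyclomatic complexity: " + (1 + (int) branches));
    }
}
```

In the setup the abstract describes, the equivalent logic would be generated as XQuery/XPath by the web service and evaluated inside the XML database rather than in application code.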
172

Generation of Software Test Data from the Design Specification Using Heuristic Techniques. Exploring the UML State Machine Diagrams and GA Based Heuristic Techniques in the Automated Generation of Software Test Data and Test Code.

Doungsa-ard, Chartchai January 2011 (has links)
Software testing is a tedious and very expensive undertaking. Automatic test data generation is, therefore, proposed in this research to help testers reduce their work as well as ascertain software quality. The concept of test-driven development (TDD) has become increasingly popular during the past several years. According to TDD, test data should be prepared before the beginning of code implementation. Therefore, this research asserts that test data should be generated from the software design documents, which are normally created prior to software code implementation. Among such design documents, UML state machine diagrams are selected as the platform for the proposed automated test data generation mechanism, because they show the behaviours of a single object in the system. A genetic algorithm (GA) based approach has been developed and applied in the search for an adequate amount of quality test data. Finally, the generated test data have been used together with UML class diagrams for JUnit test code generation. The GA-based test data generation methods have been enhanced to handle the parallel-path and loop problems of UML state machines. In addition, the proposed GA-based approach also targets diagrams with parameterised triggers. As a result, the proposed framework generates test data from the basic state machine diagram and the basic class diagram without any additional non-standard information, while most other approaches require additional information or generate test data from other formal languages. The transition coverage values for the approach introduced here are also high; therefore, the generated test data can cover most of the behaviour of the system. / EU Asia-Link project TH/Asia Link/004(91712) East-West and CAMT
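A minimal sketch of the kind of GA-based search the abstract describes, with fitness defined as the number of distinct state-machine transitions a candidate input sequence covers. The transition function, gene encoding, and GA parameters are invented for illustration and are not the thesis's actual design.

```java
import java.util.*;

public class GaTestDataSketch {
    static final Random RND = new Random(42);
    static final int GENES = 8, POP = 20, GENERATIONS = 100;

    // Toy fitness: the number of distinct transitions of a stand-in state
    // machine that the candidate input sequence triggers. A real fitness
    // function would execute the UML state machine model instead.
    static int fitness(int[] inputs) {
        Set<String> covered = new HashSet<>();
        int state = 0;
        for (int in : inputs) {
            int next = (state + in) % 4;       // invented transition function
            covered.add(state + "->" + next);
            state = next;
        }
        return covered.size();
    }

    static void sortByFitness(int[][] pop) {
        Arrays.sort(pop, Comparator.comparingInt(GaTestDataSketch::fitness).reversed());
    }

    public static void main(String[] args) {
        int[][] pop = new int[POP][GENES];
        for (int[] ind : pop)
            for (int g = 0; g < GENES; g++) ind[g] = RND.nextInt(3);

        for (int gen = 0; gen < GENERATIONS; gen++) {
            sortByFitness(pop);
            // Elitism: keep the best half, refill the rest with one-point
            // crossover between two surviving parents, plus one mutation.
            for (int i = POP / 2; i < POP; i++) {
                int[] a = pop[RND.nextInt(POP / 2)], b = pop[RND.nextInt(POP / 2)];
                int cut = 1 + RND.nextInt(GENES - 1);
                for (int g = 0; g < GENES; g++) pop[i][g] = g < cut ? a[g] : b[g];
                pop[i][RND.nextInt(GENES)] = RND.nextInt(3);
            }
        }
        sortByFitness(pop);
        System.out.println("Best transition coverage: " + fitness(pop[0]));
    }
}
```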
173

Contributions to the usability of Sorald for repairing static analysis violations

Luong Phu, Henry January 2021 (has links)
Automated static analysis tools are important in modern software quality assurance. These tools scan the input source or binary code for a set of rules to detect functional or maintainability problems and then warn developers about the rule violations found. Developers then analyze and possibly repair the rule violations in a manual procedure, which can be time-consuming. Since human effort is costly, automated solutions for repairing rule violations would play an important role in software development. In a previous work, a tool named Sorald was developed to automatically repair rule violations reported by the static analyzer SonarJava. However, Sorald lacks reliability in generating patches, and its usage by developers lacks automation. Therefore, in this work, solutions are proposed to improve the usability of Sorald. First, a new strategy of source code analysis and repair was introduced in Sorald, which allows Sorald to deliver a fix even when an internal failure occurs. Second, Sorald was integrated into a repair bot named Repairnator, which was then integrated into the Jenkins continuous integration service. This allows Sorald to be executed automatically in continuous integration builds and its generated patches to be proposed automatically to developers on GitHub. As an evaluation of the proposed solutions, Sorald was executed and monitored on 28 open-source projects hosted on GitHub. The results show that the new repair strategy improves the performance of Sorald in terms of the number of fixes, while the repair time remains mostly unchanged compared with the default repair strategy. Moreover, the total repair time of Sorald for the 15 supported SonarJava rules is within the continuous integration time of the analyzed projects, which means that it is feasible to repair projects with Sorald in such an environment. Finally, most Sorald patches are compilable and usually accepted without negative comments by developers, whenever there is a reaction on the proposed GitHub pull requests. In conclusion, the contributions of this work improve the overall usability of Sorald as an automated software repair tool.
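The failure-tolerant repair strategy can be pictured as repairing one rule at a time and keeping whatever patches succeed. The sketch below is an assumed reconstruction of that idea; the interface and method names are hypothetical and are not Sorald's actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class SegmentedRepairSketch {
    /** Hypothetical stand-in for one rule-specific repair step; not Sorald's API. */
    interface RuleRepair {
        String ruleKey();
        List<String> repair(String sourceDir) throws Exception;
    }

    /**
     * Repairs one rule at a time so that a crash in a single rule's processor
     * does not discard the patches already produced for the other rules.
     */
    static List<String> repairAll(String sourceDir, List<RuleRepair> repairs) {
        List<String> patches = new ArrayList<>();
        for (RuleRepair r : repairs) {
            try {
                patches.addAll(r.repair(sourceDir));
            } catch (Exception e) {
                // An internal failure for one rule is logged and skipped,
                // so the tool still delivers the fixes it did manage to generate.
                System.err.println("Rule " + r.ruleKey() + " failed: " + e.getMessage());
            }
        }
        return patches;
    }
}
```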
174

INVESTIGATING COMMON PERCEPTIONS OF SOFTWARE ENGINEERING METHODS APPLIED TO SCIENTIFIC COMPUTING SOFTWARE

Srinivasan, Malavika January 2018 (has links)
Scientific Computing (SC) software has significant societal impact due to its application in safety-related domains such as nuclear, aerospace, military, and medicine. Unfortunately, recent research has shown that SC software does not always achieve the desired software qualities, like maintainability, reusability, and reproducibility. Software Engineering (SE) practices have been shown to improve software qualities, but SC developers, who are often the scientists themselves, often fail to adopt SE practices because of the time commitment. To promote the application of SE in SC, we conducted a case study in which we developed new SC software. The software we developed will be used to predict the nature of solidification in a casting process, facilitating the reduction of expensive defects in parts. During the development process, we adopted SE practices and involved the scientists from the beginning. We interviewed the scientists before and after software development to assess their attitude towards SE for SC. The interviews revealed a positive response towards SE for SC. In the post-development interview, the scientists had changed their attitudes towards SE for SC and were willing to adopt all the SE approaches that we followed. However, when it came to producing software artifacts, they felt overburdened and wanted more tools to reduce the time commitment and the complexity. Contrasting our experience with currently held perceptions of scientific software development, we made the following observations.
a) Observations that agree with the existing literature: i) working on something that the scientists are interested in is not enough to promote SE practices; ii) maintainability is a secondary consideration for scientific partners; iii) scientists are hesitant to learn SE practices; iv) verification and validation are challenging in SC; v) scientists naturally follow agile methodologies; vi) common ground for communication has always been a problem; vii) an interdisciplinary team is essential; viii) scientists tend to choose a programming language based on their familiarity; ix) scientists prefer to use plots to visualize, verify and understand their science; x) early identification of test cases is advantageous; xi) scientists have a positive attitude toward issue trackers; xii) SC software should be designed for change; xiii) faking a rational design process for documentation is advisable for SC; xiv) scientists prefer informal, collegial knowledge transfer to reading documentation.
b) Observations that disagree with the existing literature: i) when unexpected results were obtained, our scientists chose to change the numerical algorithms rather than question their scientific theories; ii) documentation of up-front requirements is feasible for SC.
We present the requirement specification and design documentation for our software as evidence that, with proper abstraction and application of a "faked rational design process", it is possible to document up-front requirements and improve quality. / Thesis / Master of Science (MSc)
175

A comparative study of three ICT network programs using usability testing

Van der Linde, P.L. January 2013 (has links)
Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2013 / This study compared the usability of three Information and Communication Technology (ICT) network programs in a learning environment. The researcher wanted to establish which program was most adequate from a usability perspective among second-year Information Technology (IT) students at the Central University of Technology (CUT), Free State. The Software Usability Measurement Inventory (SUMI) is a testing technique that can measure software quality from a user perspective. The technique is supported by an extensive reference database for measuring a software product's quality in use and is embedded in an effective analysis and reporting tool called SUMI scorer (SUMISCO). SUMI was applied in a controlled laboratory environment where second-year IT students at the CUT used it, as part of their networking subject System Software 1 (SPG1), to evaluate each of the three ICT network programs. The results, including the strengths, weaknesses, and usability improvements identified by SUMISCO, are discussed to determine the best ICT network program from a usability perspective according to SPG1 students.
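SUMI's actual scoring relies on a proprietary reference database, so the following is only a rough sketch of the general shape of questionnaire-based usability scoring: aggregating Likert-style item responses into per-scale means that can be compared across the three programs. The subscale names follow SUMI's published scales, but the item data and the scoring are invented for illustration and are not SUMI's actual algorithm.

```java
import java.util.*;

public class UsabilityScoreSketch {
    /**
     * Aggregates Likert-style responses (e.g. 1 = disagree, 2 = undecided,
     * 3 = agree) into a mean score per usability scale. Real SUMI scoring
     * standardizes against a reference database; this is a simplification.
     */
    static Map<String, Double> scaleMeans(Map<String, int[]> responsesByScale) {
        Map<String, Double> means = new LinkedHashMap<>();
        responsesByScale.forEach((scale, responses) ->
                means.put(scale, Arrays.stream(responses).average().orElse(0)));
        return means;
    }

    public static void main(String[] args) {
        // Invented responses for one program, grouped by subscale.
        Map<String, int[]> programA = new LinkedHashMap<>();
        programA.put("Efficiency",   new int[]{3, 2, 3, 3});
        programA.put("Affect",       new int[]{2, 2, 3, 1});
        programA.put("Helpfulness",  new int[]{3, 3, 2, 3});
        programA.put("Control",      new int[]{2, 3, 2, 2});
        programA.put("Learnability", new int[]{1, 2, 2, 3});
        scaleMeans(programA).forEach((scale, mean) ->
                System.out.printf("%-12s %.2f%n", scale, mean));
    }
}
```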
176

Capturing, Eliciting, and Prioritizing (CEP) Non-Functional Requirements Metadata during the Early Stages of Agile Software Development

Maiti, Richard Rabin 01 January 2016 (has links)
Agile software engineering has been a popular methodology for developing software rapidly and efficiently. However, the agile methodology often favours Functional Requirements (FRs), due to the nature of agile software development, and strongly neglects Non-Functional Requirements (NFRs). Neglecting NFRs has had negative impacts on software products, resulting in poor quality and a higher cost of fixing problems in later stages of software development. This research developed the CEP ("Capture Elicit Prioritize") methodology to effectively gather NFR metadata from software requirement artifacts such as documents and images. The artifacts included the Optical Character Recognition (OCR) artifact, which gathered metadata from images, as well as the Database Artifact, NFR Locator Plus, NFR Priority Artifact, and Visualization Artifact. The gathered NFR metadata reduced false positives and allowed NFRs to be included, along with FRs, in the early stages of software requirements gathering. Furthermore, NFRs were prioritized using existing FR methodologies, which is important to stakeholders as well as to software engineers in delivering quality software. This research built on prior studies by specifically focusing on NFRs during the early stages of agile software development. Validation of the CEP methodology was accomplished using the 26 requirements of the European Union (EU) eProcurement System. The NORMAP methodology was used as a baseline, and the NERV methodology results were used for comparison. The results show that the CEP methodology successfully identified NFRs in 56 out of 57 requirement sentences that contained NFRs, compared to 50 for the baseline and 55 for the NERV methodology. The CEP methodology thus elicited 98.24% of the NFR-bearing sentences, compared to the NORMAP methodology's 87.71%, an improvement of 10.53%; the NERV methodology's result was 96.49%, so CEP represents an improvement of 1.75%. The CEP methodology successfully elicited 86 out of 88 NFRs, compared to 75 for the baseline NORMAP methodology and 82 for the NERV methodology. The NFR-count elicitation success rate for the CEP methodology was 97.73%, compared to 85.24% for the NORMAP methodology, an improvement of 12.49%; compared to the NERV methodology's 93.18%, CEP shows an improvement of 4.55%. The CEP methodology utilized the associated NFR Metadata (NFRM)/figures/images and linked them to the related requirements, improving on the NORMAP and NERV methodologies: 29 baseline NFRs were found in the associated figures/images (NFRM), and 129 NFRs were found both in the requirement sentence and in the associated figure/images (NFRM). Another goal of this study was to improve the prioritization of NFRs compared to prior studies. This research provided effective techniques for prioritizing NFRs during the early stages of agile software development and examined the impacts that NFRs have on the software development process. The CEP methodology effectively prioritized NFRs by utilizing the αβγ-framework in a similar way to FRs. The sub-process of the αβγ-framework was modified in a way that provides a very attractive feature to agile team members: parts of the αβγ-framework can be replaced to suit the team's specific needs in prioritizing NFRs. The top five requirements based on NFR prioritization were 12.3, 24.5, 15.3, 7.5, and 7.1. The prioritization of NFRs fits the agile software development cycle and allows agile developers and team members to plan accordingly to accommodate time and budget constraints.
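The percentages above follow directly from the reported counts; the quick computation below reproduces the abstract's own figures (with one-hundredth-of-a-percent rounding differences on two of them).

```java
public class ElicitationRates {
    static double pct(int hits, int total) {
        return 100.0 * hits / total;
    }

    public static void main(String[] args) {
        // Sentence-level elicitation, out of 57 NFR-bearing requirement sentences.
        System.out.printf("CEP:    %.2f%%%n", pct(56, 57)); // ~98.25 (abstract reports 98.24)
        System.out.printf("NORMAP: %.2f%%%n", pct(50, 57)); // ~87.72 (abstract reports 87.71)
        System.out.printf("NERV:   %.2f%%%n", pct(55, 57)); // ~96.49

        // NFR-count elicitation, out of 88 NFRs.
        System.out.printf("CEP:    %.2f%%%n", pct(86, 88)); // ~97.73
        System.out.printf("NORMAP: %.2f%%%n", pct(75, 88)); // ~85.23 (abstract reports 85.24)
        System.out.printf("NERV:   %.2f%%%n", pct(82, 88)); // ~93.18
    }
}
```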
177

Selection and implementation of test framework for automated system test of mobile application

Shrivatri, Ankit 03 May 2016 (has links) (PDF)
Software quality is a key concern for any company working with software development, since the success of any software product depends directly on its quality, and the software is expected to remain of high quality for a long time. With the introduction of mobile applications, the task of maintaining application quality has become more difficult and has faced many challenges. Many companies working with mobile applications have reformed their processes in order to maintain the quality of their applications. The introduction of automation testing into the test process is one such reform, and it has changed the face of mobile application testing today. This work deals with the concepts of automated system testing for mobile applications, an area that is still new and has much left to explore. The approach to automation testing is simple yet unique for the department of PT-MT/Quality Management at Robert Bosch GmbH, based in Leinfelden, Stuttgart. A test framework is selected and implemented for automated testing of the mobile applications being developed. For this, a requirement specification document is created, which forms the basis for selecting a framework from the KT analysis table. Finally, the framework TestComplete is implemented for the already developed application "PLR measure&go". The implementation includes, as part of the documentation, all the procedures required to set up the test framework. TestComplete is used to create system tests for the iOS and Android operating systems. Lastly, test execution and result reporting are shown as a complete process for automation testing.
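A KT (Kepner-Tregoe) analysis of candidate frameworks reduces to a weighted scoring matrix; the sketch below shows that shape. The criteria, weights, and scores are invented for illustration and are not the thesis's actual evaluation data.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KtAnalysisSketch {
    /** Weighted sum of per-criterion scores, as in a KT decision-analysis table. */
    static int weightedScore(Map<String, Integer> weights, Map<String, Integer> scores) {
        return weights.entrySet().stream()
                .mapToInt(e -> e.getValue() * scores.getOrDefault(e.getKey(), 0))
                .sum();
    }

    public static void main(String[] args) {
        // Criteria and weights (1-10) are hypothetical, not the thesis's data.
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("iOS and Android support", 10);
        weights.put("Record and replay", 6);
        weights.put("CI integration", 8);
        weights.put("License cost", 5);

        Map<String, Integer> frameworkA = Map.of(
                "iOS and Android support", 9, "Record and replay", 8,
                "CI integration", 7, "License cost", 4);
        Map<String, Integer> frameworkB = Map.of(
                "iOS and Android support", 7, "Record and replay", 5,
                "CI integration", 9, "License cost", 8);

        System.out.println("Framework A: " + weightedScore(weights, frameworkA));
        System.out.println("Framework B: " + weightedScore(weights, frameworkB));
    }
}
```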
179

An investigation into quality assurance of the Open Source Software Development model

Otte, Tobias January 2010 (has links)
The Open Source Software Development (OSSD) model has launched products in rapid succession and with high quality, without following the traditional quality practices of accepted software development models (Raymond 1999). Some OSSD projects challenge established quality assurance approaches, claiming to be successful through techniques partly contrary to standard software development. However, empirical studies of quality assurance practices for Open Source Software (OSS) are rare (Glass 2001). Therefore, further research is required to evaluate the quality assurance processes and methods within the OSSD model. The aim of this research is to improve the understanding of quality assurance practices under the OSSD model. The OSSD model is characterised by a collaborative, distributed development approach with public communication, free participation, free entry to the project for newcomers and unlimited access to the source code. The research examines applied quality assurance practices from a process view rather than from a product view. It follows ideographic and nomothetic methodologies and adopts an anti-positivist epistemological approach. Applied quality assurance practices in OSS projects are first examined through a literature review, and the survey research method is then used to gain empirical evidence about applied practices. The findings are used to validate the theoretical knowledge and to obtain further expertise about practical approaches, and they contribute to the development of a quality assurance framework for standard OSSD approaches. The result is an appropriate quality model with metrics that support the requirements of the OSSD model. An ideographic approach with case studies is used to extend the body of knowledge and to assess the feasibility and applicability of the quality assurance framework. In conclusion, the study provides further understanding of the applied quality assurance processes under the OSSD model and shows how a quality assurance framework can support the development processes with guidelines and measurements.
180

A COMPLIANCE AND RISK-BASED SOFTWARE DEVELOPMENT PROCESS ASSESSMENT APPROACH

RAFAEL DE SOUZA LIMA ESPINHA 30 July 2007 (has links)
Nowadays, one of the main requirements of a software development project is the delivery of a quality product that conforms to the expected schedule and budget and satisfies customer needs. Using the hypothesis that the quality of the developed product is closely related to the quality of the processes used in its development, many organizations invest in process improvement programs, where the processes are continuously assessed and improved. In this work we propose an approach for process assessment based on risk and process compliance analysis. This approach is composed of a two-step appraisal method and a supporting tool. In the first step of the method, a quick analysis is executed to identify the most problematic areas. In the second, a more elaborate analysis is performed only in the critical areas, reducing the costs and increasing the effectiveness of the appraisal. The tool uses a mechanism of surveys and checklists to verify the risk and the compliance of the processes of the organization, with a knowledge base organized in accordance with a reference quality norm or maturity model. At the end of an assessment, reports, tables and charts support decision-making, and they can be used to guide an improvement program. The approach has been used in three case studies.
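The two-step method reduces to: screen every process area cheaply, then spend the elaborate checklist-based assessment only on areas whose screening score crosses a criticality threshold. A minimal sketch of that flow, under invented area names, ratings, and threshold:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoStepAssessmentSketch {
    /** Phase 1: cheap screening score per process area (higher = more problematic). */
    static Map<String, Double> screen(Map<String, double[]> questionnaire) {
        Map<String, Double> riskByArea = new LinkedHashMap<>();
        questionnaire.forEach((area, answers) -> {
            double risk = 0;
            for (double a : answers) risk += a; // e.g. sum of per-question risk ratings
            riskByArea.put(area, risk / answers.length);
        });
        return riskByArea;
    }

    public static void main(String[] args) {
        // Invented screening answers on a 0-1 risk scale; not the thesis's data.
        Map<String, double[]> questionnaire = new LinkedHashMap<>();
        questionnaire.put("Requirements management",  new double[]{0.8, 0.7, 0.9});
        questionnaire.put("Configuration management", new double[]{0.2, 0.3, 0.1});
        questionnaire.put("Project planning",         new double[]{0.6, 0.5, 0.7});

        double threshold = 0.5; // hypothetical cut-off for "critical"
        screen(questionnaire).forEach((area, risk) -> {
            // Phase 2 (the elaborate checklist-based assessment) runs only here.
            if (risk > threshold)
                System.out.println("Deep assessment needed for: " + area);
        });
    }
}
```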
