1

FixEval: Execution-based Evaluation of Program Fixes for Competitive Programming Problems

Haque, Md Mahim Anjum 14 November 2023 (has links)
In the software life-cycle, source code repositories serve as vast storage areas for program code, ensuring its maintenance and version control throughout the development process. It is not uncommon for these repositories to house programs with hidden errors, which manifest only under specific input conditions, causing the program to deviate from its intended functionality. The growing intricacy of software design has amplified the time and resources required to pinpoint and rectify these issues. These errors, often unintended by developers, can be challenging to identify and correct. While there are techniques to auto-correct faulty code, the expansive realm of potential solutions for a single bug means there is a scarcity of tools and datasets for effective evaluation of the corrected code. This study presents FIXEVAL, a benchmark that includes flawed code entries from competitive coding challenges and their corresponding corrections. FIXEVAL offers an extensive test suite that not only gauges the accuracy of fixes generated by models but also allows for the assessment of a program's functional correctness. The suite further sheds light on time limits, memory limits, and acceptance based on specific outcomes. We utilize cutting-edge language models trained on coding languages as our reference point and compare them using match-based (essentially token-similarity) and execution-based (focused on functional assessment) criteria. Our research indicates that while match-based criteria may not truly represent the functional precision of model-generated fixes, execution-based approaches offer a comprehensive evaluation tailored to the solution. Consequently, we posit that FIXEVAL paves the way for practical automated error correction and assessment of model-generated code. The dataset and models for all of our experiments are publicly available at https://github.com/mahimanzum/FixEval.
/ Master of Science / Think of source code repositories as big digital libraries where computer programs are kept safe and updated. Sometimes these programs have hidden mistakes that show up only under certain conditions, making the program act differently than planned; these mistakes are what we call bugs or errors. As software gets more complex, it takes more time and effort to find and fix these mistakes. Even though there are ways to automatically fix these errors, finding the best solution can be like looking for a needle in a haystack. That is why there are not many tools to check whether the automatic fixes are right. Enter FIXEVAL: our new benchmark that tests and compares faulty computer code from coding competitions and its fixes. It has a set of tests to see how well the fixed code works and gives insights into its performance and results. We used the latest language models to see how well they fix code, comparing them in two ways: by looking at the code's structure and by testing its function. Our findings? Just looking at the code's structure is not enough; we need to test how it works in action. We believe FIXEVAL is a big step forward in making sure automatic code fixes are spot-on. The dataset and models for all of our experiments are publicly available at https://github.com/mahimanzum/FixEval.
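The match-based versus execution-based distinction in the abstract above can be sketched as follows. This is a hypothetical illustration, not the FixEval implementation: the metric names, the toy `absval` programs, and the test suite are all invented for the example.

```python
# Sketch (assumptions, not FixEval's code): a crude token-overlap score versus
# an execution-based check for candidate program fixes.

def token_match_score(candidate: str, reference: str) -> float:
    """Fraction of reference tokens also present in the candidate — a crude
    stand-in for match-based metrics such as exact match or BLEU."""
    cand, ref = candidate.split(), reference.split()
    if not ref:
        return 1.0
    return sum(1 for t in ref if t in cand) / len(ref)

def execution_score(fix_fn, test_suite) -> float:
    """Fraction of (input, expected_output) pairs the fixed program passes."""
    passed = 0
    for inp, expected in test_suite:
        try:
            if fix_fn(inp) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors count as failures
    return passed / len(test_suite)

# Two candidate fixes for "return the absolute value of x":
reference_src = "def absval(x): return -x if x < 0 else x"
fix_b_src     = "def absval(x): return x if x < 0 else -x"  # same tokens, inverted logic

fix_a = lambda x: -x if x < 0 else x   # correct fix
fix_b = lambda x: x if x < 0 else -x   # broken fix with identical token bag
tests = [(-3, 3), (0, 0), (5, 5)]

print(token_match_score(fix_b_src, reference_src))  # 1.0 — match-based score is perfect
print(execution_score(fix_a, tests))                # 1.0 — correct fix passes everything
print(execution_score(fix_b, tests))                # well below 1.0 — execution exposes the bug
```

The broken fix scores perfectly under token overlap but fails most tests when executed, which is exactly the gap between match-based and execution-based evaluation the abstract describes.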
2

Performance modelling of reactive web applications using trace data from automated testing

Anderson, Michael 29 April 2019 (has links)
This thesis evaluates a method for extracting architectural dependencies and performance measures from an evolving distributed software system. The research goal was to establish methods for determining potential scalability issues in a distributed software system as it is being iteratively developed. The research evaluated the use of industry-available distributed tracing methods to extract performance measures and queuing-network-model parameters for common user activities. Additionally, a method was developed to trace and collect the system operations that correspond to these user activities, utilizing automated acceptance testing. Performance-measure extraction with this method was tested across several historical releases of a real-world distributed software system. The trends in performance measures across releases correspond to several scalability issues identified in the production software system. / Graduate
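Extracting queuing-model parameters from trace data, as described above, can be sketched roughly as follows. The span format, service names, and the M/M/1 response-time approximation are assumptions for illustration, not the thesis's actual tooling.

```python
# Hedged sketch: deriving simple queuing-model parameters (mean service time,
# utilization, approximate response time) from distributed-trace spans
# collected during an automated test run.
from collections import defaultdict

# Each span: (service_name, start_ms, end_ms) from one traced user activity.
spans = [
    ("web",   0,  40), ("db",  10,  30),
    ("web",  50,  95), ("db",  60,  85),
    ("web", 100, 135), ("db", 110, 125),
]

window_ms = 200.0  # observation window of the test run

demand = defaultdict(float)  # total busy time per service
count = defaultdict(int)     # number of spans per service
for name, start, end in spans:
    demand[name] += end - start
    count[name] += 1

for name in sorted(demand):
    service_time = demand[name] / count[name]  # mean service time S
    utilization = demand[name] / window_ms     # U = busy time / window
    # M/M/1 approximation for mean response time: R = S / (1 - U)
    response = service_time / (1 - utilization)
    print(f"{name}: S={service_time:.1f}ms U={utilization:.2f} R~{response:.1f}ms")
```

Tracking these parameters release over release is one plausible way the trend analysis described in the abstract could surface growing utilization before it becomes a production scalability issue.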
3

Aplicabilidade da modelagem otimizada no processo de diagramação de revista eletrônica e impressa no âmbito acadêmico

Amaral, Vinícius Rodrigues do 05 August 2016 (has links)
The purpose of this study is to analyze the production of periodicals and identify the processes that may be substituted, totally or partially, by optimized-modeling software. Building on a theoretical foundation drawn from the academic literature on optimized modeling, a mapping of several current periodicals on the SciElo platform was carried out to observe the diagramming styles occurring in this corpus, providing an empirical grounding for the recurring components in journal layout. The study proceeds with a critical analysis of the development of a matrix for receiving article content and an assessment of whether the objectives were fulfilled in whole or in part.
4

A Factorial Experiment on Scalability of Search-based Software Testing

Mehrmand, Arash January 2009 (has links)
Software testing is an expensive process that is vital in industry. Constructing the test data accounts for the major cost of software testing, so knowing which method to use to generate the test data is very important. This paper discusses the performance of search-based algorithms (specifically a genetic algorithm) versus random testing for software test-data generation. A factorial experiment is designed so that each experiment involves more than one factor. Although much research has been done in the area of automated software testing, this study differs in the sample programs (SUTs) that are used: program generation is itself automated, with Grammatical Evolution guiding it. The programs are not goal-based but are generated according to a provided grammar, at different levels of complexity. The genetic algorithm is applied to the programs first, followed by random testing. Based on the results, this paper recommends one method to use for software testing when the SUT has the same characteristics as those in this study. The SUTs differ from the sample programs provided in other studies because they are generated from a grammar.
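The contrast between search-based and random test-data generation can be sketched on a toy branch-coverage goal. Everything here is an assumption for illustration: the target branch, the branch-distance fitness, and the use of a simple distance-guided local search standing in for the thesis's genetic algorithm.

```python
# Sketch (assumptions, not the thesis's experiment): distance-guided search
# versus random testing, both trying to cover a hard-to-reach branch.
import random

def sut_branch_distance(x: int) -> int:
    """Branch distance for a hypothetical target branch `x == 4242` in the
    program under test; 0 means the branch is covered."""
    return abs(x - 4242)

def random_testing(trials: int, seed: int = 0) -> bool:
    """Pure random testing: draw inputs and hope one covers the branch."""
    rng = random.Random(seed)
    return any(sut_branch_distance(rng.randrange(10**6)) == 0
               for _ in range(trials))

def guided_search(x: int, max_iters: int = 10**6) -> int:
    """Distance-guided local search (an alternating-variable style method,
    standing in for the genetic algorithm used in the thesis)."""
    for _ in range(max_iters):
        d = sut_branch_distance(x)
        if d == 0:
            return x  # branch covered
        # exploratory move: pick the better of the two unit neighbours
        best = min((x - 1, x + 1), key=sut_branch_distance)
        if sut_branch_distance(best) >= d:
            break
        # pattern move: keep doubling the step while it still improves
        step = best - x
        while sut_branch_distance(best + step) < sut_branch_distance(best):
            best, step = best + step, step * 2
        x = best
    return x

print(guided_search(999999))  # 4242 — the fitness gradient pulls it to the branch
print(random_testing(1000))   # a 1000-trial random budget almost never covers it
```

The fitness function gives the search a gradient toward the branch condition, which is why search-based methods can cover branches that uniform random testing hits only with negligible probability.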
5

Comparative Study of Performance Testing Tools: Apache JMeter and HP LoadRunner

Khan, Rizwan Bahrawar January 2016 (has links)
Software testing plays a key role in software development. There are two approaches to software testing, manual testing and automated testing, both used to detect faults. There are a number of automated software testing tools with different purposes, but selecting a testing tool that fits one's needs is always a challenge. In this research, the author compares two software testing tools, Apache JMeter and HP LoadRunner, to determine their usability and efficiency. Different parameters were selected to guide the tool-evaluation process. To complete the objective of the research, a scenario-based survey was conducted and two different web applications were tested. The research finds that Apache JMeter has an edge over HP LoadRunner in several aspects, including installation, interface, and ease of learning.
6

Creating and Deploying Metamorphic Services for SWMM Community Based on FaaS Architecture

Lin, Xuanyi 29 September 2021 (has links)
No description available.
7

Automated Software Testing in an Embedded Real-Time System

Andersson, Johan, Andersson, Katrin January 2007 (has links)
Today, automated software testing has been implemented successfully in many systems; however, there are still relatively unexplored areas, such as how automated testing can be implemented in a real-time embedded system. This problem is the foundation for the work in this master's thesis: to investigate the possibility of implementing an automated software testing process for an embedded real-time system at IVU Traffic Technologies AG in Aachen, Germany. The system under test is the on-board system i.box. This report contains the results of a literature study carried out to present the foundation for the thesis's solution. Questions answered in the study are: when to automate, how to automate, and which traps to avoid when implementing an automated software testing process in an embedded system. Automating the manual process involved steps such as constructing test cases for automated testing and analysing whether an existing tool should be used or a unique test system needed to be developed. The analysis, based on the requirements on the test system, the literature study, and an investigation of available test tools, led to the development of a new test tool. Due to limited development time and the characteristics of the i.box, the new tool was built around post-execution evaluation and was therefore divided into two parts: one that executes the tests and one that evaluates the results. By implementing an automated test tool, it has been shown that it is possible to automate the test process at the system-test level in the i.box.
8

BadPair: a framework for automated software testing

Chang, Chien-Hsing 10 August 2010 (has links)
Testing every possible combination of the input parameter values is often impractical, inefficient or too expensive. One common alternative is pairwise testing where every pairwise combination of the parameter values is tested. Although pairwise testing significantly reduces the number of test cases, the challenge remains in analyzing the test outputs to discern the precise characteristics of parameters causing the failures. This thesis proposes a novel approach to output analysis by identifying “bad pairs”: pairs that always result in failed test cases. A framework implementing the proposed approach is presented together with three case studies. Results from the case studies suggest there are positive relationships among the numbers of failed test cases, faults, and independent bad pairs. Also, filtering of test cases seems to have a significant impact on the bad pairs identified. We believe the proposed approach can facilitate the debugging process in software testing.
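The "bad pair" idea above can be sketched in a few lines. This is a hypothetical illustration, not the BadPair framework's API: the parameter names, test outcomes, and the exact "never passed" criterion are all invented for the example.

```python
# Sketch (assumptions, not the thesis's framework): flag "bad pairs" —
# parameter-value pairs that occur only in failed test cases.
from itertools import combinations
from collections import defaultdict

# Each test case: ({parameter: value, ...}, passed?). The seeded fault here
# is the combination db=sqlite with ui=cli.
results = [
    ({"os": "linux", "db": "mysql",  "ui": "web"}, True),
    ({"os": "linux", "db": "sqlite", "ui": "cli"}, False),
    ({"os": "mac",   "db": "sqlite", "ui": "web"}, True),
    ({"os": "mac",   "db": "mysql",  "ui": "cli"}, True),
    ({"os": "mac",   "db": "sqlite", "ui": "cli"}, False),
]

seen = defaultdict(lambda: {"pass": 0, "fail": 0})
for params, passed in results:
    for (p1, v1), (p2, v2) in combinations(sorted(params.items()), 2):
        seen[((p1, v1), (p2, v2))]["pass" if passed else "fail"] += 1

# A "bad pair" has at least one failure and no passing occurrence.
bad_pairs = [pair for pair, n in seen.items()
             if n["fail"] > 0 and n["pass"] == 0]
print(bad_pairs)
```

The true culprit `(db=sqlite, ui=cli)` is flagged, but so are two pairs that happen to occur only in one failing case; as the abstract suggests, the set of candidate bad pairs shrinks toward the real fault as more test cases exonerate innocent pairs.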
10

Revamping Binary Analysis with Sampling and Probabilistic Inference

Zhuo Zhang (16398420) 19 June 2023 (has links)
Binary analysis, a cornerstone technique in cybersecurity, enables the examination of binary executables irrespective of source code availability. It plays a critical role in understanding program behaviors, detecting software bugs, and mitigating potential vulnerabilities, especially in situations where the source code remains out of reach. However, aligning the efficacy of binary analysis with that of source-level analysis remains a significant challenge, primarily due to the uncertainty caused by the loss of semantic information during compilation.

This dissertation presents an innovative probabilistic approach, termed probabilistic binary analysis, designed to combat the intrinsic uncertainty in binary analysis. It builds on the fundamental principles of program sampling and probabilistic inference, enhanced further by an iterative refinement architecture. The dissertation suggests that a thorough and practical method of sampling program behaviors can yield a substantial quantity of hints that can be instrumental in recovering lost information, despite the potential inclusion of some inaccuracies. A probabilistic inference technique is then applied to systematically incorporate and process the collected hints, suppressing the incorrect ones and thereby enabling the interpretation of high-level semantics. Furthermore, an iterative refinement mechanism augments the efficiency of the probabilistic analysis in subsequent applications, facilitating the progressive enhancement of analysis outcomes through an automated or human-guided feedback loop.

This work offers an in-depth understanding of the challenges and solutions related to assessing low-level program representations and systematically handling the inherent uncertainty in binary analysis. It aims to contribute to the field by advancing the development of precise, reliable, and interpretable binary analysis solutions, thereby setting the groundwork for future exploration in this domain.
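The core idea of fusing many noisy hints can be sketched with a toy odds-based update. This is a simplified assumption-laden illustration, not the dissertation's inference algorithm: the "is this value a pointer?" question, the independence assumption, and the per-hint reliabilities are all invented for the example.

```python
# Sketch (assumptions, not the dissertation's method): fuse independent noisy
# hints about a binary-level fact via a naive-Bayes style odds update.
def fuse(prior: float, hints: list[tuple[bool, float]]) -> float:
    """Posterior probability that, e.g., a sampled value is a pointer.
    Each hint is (says_pointer, reliability), with reliability in (0.5, 1)
    meaning the hint is informative and treated as independent."""
    odds = prior / (1 - prior)
    for says_pointer, reliability in hints:
        lr = reliability / (1 - reliability)  # likelihood ratio of one hint
        odds *= lr if says_pointer else 1 / lr
    return odds / (1 + odds)

# Two supporting hints and one contradicting hint, from sampled executions:
posterior = fuse(0.5, [(True, 0.8), (True, 0.7), (False, 0.6)])
print(round(posterior, 3))  # above 0.5: the supporting hints dominate
```

Individually weak hints accumulate into a confident posterior while occasional incorrect hints are suppressed rather than fatal, which mirrors the abstract's claim that inference over many sampled hints can recover semantics lost in compilation.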
