91

Testing Challenges of Mobile Augmented Reality Systems

Lehman, Sarah, 0000-0002-9466-0688 January 2022 (has links)
Augmented reality systems are ones which insert virtual content into a user’s view of the real world, in response to environmental conditions and the user’s behavior within that environment. This virtual content can take the form of visual elements such as 2D labels or 3D models, auditory cues, or even haptics; content is generated and updated based on user behavior and environmental conditions, such as the user’s location, movement patterns, and the results of computer vision or machine learning operations. AR systems are used to solve problems in a range of domains, from tourism and retail, education and healthcare, to industry and entertainment. For example, apps from Lowe’s [82] and Houzz [81] support retail transactions by scanning a user’s environment and placing product models into the space, thus allowing the user to preview what the product might look like in her home. AR systems have also proven helpful in such areas as aiding industrial assembly tasks [155, 175], helping users overcome phobias [35], and reviving interest in cultural heritage sites [163]. Mobile AR systems are ones which run on portable handheld or wearable devices, such that the user is free to move around their environment without restriction. Examples of such devices include smartphones, tablets, and head-mounted displays. This freedom of movement and usage, in combination with the application’s reliance on computer vision and machine learning logic to provide core functionality, makes mobile AR applications very difficult to test. In addition, as the demand for and prevalence of machine learning logic increase, the availability and power of commercially available third-party vision libraries introduce new and easy ways for developers to violate usability and end-user privacy. The goal of this dissertation, therefore, is to understand and mitigate the challenges involved in testing mobile AR systems, given the capabilities of today’s commercially available vision and machine learning libraries. We consider three related challenge areas: application behavior during unconstrained usage conditions, general usability, and end-user privacy. To address these challenge areas, we present three research efforts. The first presents a framework for collecting application performance and usability data in the wild. The second explores how commercial vision libraries can be exploited to conduct machine learning operations without user knowledge. The third presents a framework for leveraging the environment itself to enforce privacy and access control policies for mobile AR applications. / Computer and Information Science
92

Impact-Driven Regression Test Selection for Mainframe Business Systems

Dharmapurikar, Abhishek V. 25 July 2013 (has links)
No description available.
93

Falcon: A Testing Language to Support Test Creation and Comprehension

Kuhlman, Aaron J. 05 May 2023 (has links)
No description available.
94

Examining Introductory Computer Science Student Cognition When Testing Software Under Different Test Adequacy Criteria

Shin, Austin 01 August 2022 (has links) (PDF)
The ability to test software is invaluable in all areas of computer science, but it is often neglected in computer science curricula. Test adequacy criteria (TAC), tools that measure the effectiveness of a test suite, have been used as aids to improve software testing teaching practices, but little is known about how students respond to them. Studies have examined the cognitive processes of students programming and professional developers writing tests, but none have investigated how student testers test with TAC. If we are to improve how they are used in the classroom, we must start by understanding the different ways that they affect students’ thought processes as they write tests. In this thesis, we take a grounded theory approach to reveal the underlying cognitive processes that students utilize as they test under no feedback, condition coverage, and mutation analysis. We recorded 12 students as they thought aloud while creating test suites under these feedback mechanisms, and then we analyzed these recordings to identify the thought processes they used. We present our findings in the form of the phenomena we identified, which can be further investigated to shed more light on how different TAC affect students as they write tests.
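As a concrete illustration of the mutation-analysis feedback mentioned above (a generic sketch, not the study's actual tooling or tasks), the snippet below hand-writes two mutants of a tiny function and checks which ones a test suite can tell apart from the original; real mutation tools generate such mutants automatically.

```python
# Minimal illustration of mutation analysis: a test suite is judged by how
# many seeded faults (mutants) it can distinguish from the original code.
# All names here are hypothetical; real tools generate mutants automatically.

def max_of_two(a, b):          # original implementation
    return a if a > b else b

def mutant_geq(a, b):          # mutant: ">" replaced with ">="
    return a if a >= b else b

def mutant_min(a, b):          # mutant: returns the wrong branch
    return b if a > b else a

def run_suite(f, cases):
    """Return True if f passes every (args, expected) pair."""
    return all(f(*args) == expected for args, expected in cases)

suite = [((3, 1), 3), ((1, 3), 3)]            # note: no equal-inputs case
mutants = {"geq": mutant_geq, "min": mutant_min}

killed = {name: not run_suite(m, suite) for name, m in mutants.items()}
print(killed)   # {'geq': False, 'min': True} -- the ">=" mutant survives,
                # hinting that the suite never exercises equal inputs
```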
95

Safety Critical Software - Test Coverage vs Remaining Faults

Sundell, Johan January 2022 (has links)
Safety-critical software systems have traditionally been found in the aerospace, nuclear, and medical domains. As technology advances and software complexity increases, such systems can be found in more and more applications, e.g. self-driving cars. These systems need to meet exceptionally strict standards in terms of dependability. Proving compliance is a challenge for the industry. The regulatory bodies often require a certain amount of testing to be performed but do not require evidence of a given failure rate (which for software is hard to deal with compared to hardware). This Licentiate thesis discusses how to quantify test results and analyses what conclusions can be drawn from a given test effort, in terms of remaining faults in the software.
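For context on why such quantification is hard, the sketch below shows one standard statistical-testing calculation (a classic textbook model, not necessarily the one developed in this thesis): if n independent tests drawn from the operational profile all pass, then with confidence C the per-demand failure probability p satisfies (1 - p)^n = 1 - C.

```python
# A standard statistical-testing bound (assumed model, not necessarily the
# thesis's): after n failure-free tests sampled from the operational
# profile, with confidence C the per-demand failure probability is at most
# p, where (1 - p)^n = 1 - C.

def failure_rate_bound(n_passed: int, confidence: float) -> float:
    """Upper bound on per-demand failure probability after n failure-free tests."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_passed)

for n in (1_000, 10_000, 100_000):
    print(n, f"{failure_rate_bound(n, 0.99):.2e}")
# Roughly 4.6e-3, 4.6e-4, 4.6e-5: each tenfold increase in failure-free
# tests buys only one order of magnitude, which is why ultra-low failure
# rates are so hard to demonstrate by testing alone.
```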
96

Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines

Tobkin, Toby 01 May 2013 (has links)
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to somehow automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: blackbox or whitebox. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated input, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not do any "thinking" about the software under test to generate inputs, blackbox fuzzers can generate more inputs per unit time if all other factors are equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study with the intent of characterizing an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and some parts of a file compression library.
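To make the blackbox side concrete, here is a minimal mutation-based blackbox fuzzer sketch: an illustration of the general technique, not the fuzzer implemented in the thesis. The seed file and target command are hypothetical placeholders.

```python
# Minimal blackbox mutation fuzzer: randomly flip bytes in a seed input and
# watch the target for crashes. Seed file and target are placeholders.

import random
import subprocess
import tempfile

def mutate(seed: bytes, n_flips: int = 8) -> bytes:
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz_once(seed: bytes, target_cmd: list[str]) -> bool:
    """Run the target on one mutated input; True means it crashed."""
    with tempfile.NamedTemporaryFile(suffix=".gz", delete=False) as f:
        f.write(mutate(seed))
        path = f.name
    result = subprocess.run(target_cmd + [path], capture_output=True)
    return result.returncode < 0   # negative = killed by a signal (e.g. SIGSEGV)

seed = open("sample.gz", "rb").read()          # hypothetical seed file
crashes = sum(fuzz_once(seed, ["gunzip", "-t"]) for _ in range(100))
print(f"{crashes} crashing inputs out of 100")
```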
97

TOOL DEVELOPMENT FOR TEST OPTIMIZATION PURPOSES

Cako, Gezim January 2021 (has links)
Background: Software testing is a crucial part of the system's development life-cycle, which pays off in detecting flaws and defects, in turn leading to high-quality products. Generally, software testing is performed either manually by a human operator or automatically. As test cases are written and executed, the testing process checks whether all the requirements are covered and the system exhibits the expected behavior. A great portion of the cost and time of software development is spent on testing; therefore, depending on the type of software, test optimization is needed and presented as a solution for cost efficiency and time-saving. Aim: This thesis aims to propose and evaluate the improved sOrTES+ tool for test optimization purposes, consisting of selection, prioritization, and scheduling of test cases integrated into a dynamic user interface. Method: In this thesis, test optimization is addressed in two aspects: low-level requirements and high-level requirements. Our solution analyzes these requirements to detect the dependencies between test cases. Thus, we propose sOrTES+, a tool that uses three different scheduling techniques for test optimization: Greedy, Greedy DO (direct output), and Greedy TO (total output). The mentioned techniques are integrated into a dynamic user interface that allows testers to manage their projects, see useful information about test cases and requirements, store the executed test cases while scheduling the remaining ones for execution, and switch between the scheduling techniques according to the project requirements. Finally, we demonstrated the tool's applicability by comparing it with the existing testing techniques used by our industrial partner, Alstom, evaluating efficiency in terms of requirement coverage and troubleshooting time. Results: Our comparison shows that our solution improves requirement coverage, increasing it by 26.4%, while decreasing troubleshooting time by 6%. Conclusion: Based on our results, we conclude that our proposed tool, sOrTES+, can be used for test optimization and performs more efficiently than the existing methods used by our industrial partner Alstom.
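The sketch below shows the generic idea behind greedy, coverage-driven test scheduling, where each step picks the test covering the most still-unsatisfied requirements. It illustrates the general approach only, not sOrTES+'s actual Greedy, Greedy DO, or Greedy TO algorithms.

```python
# Generic greedy schedule by marginal requirement coverage. A sketch of the
# idea behind such tools, not sOrTES+'s actual scheduling logic.

def greedy_schedule(tests: dict[str, set[str]]) -> list[str]:
    """Order tests so each next test covers the most still-uncovered requirements."""
    uncovered = set().union(*tests.values())
    remaining = dict(tests)
    order = []
    while uncovered and remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
    order.extend(remaining)        # tests adding no new coverage go last
    return order

tests = {                          # hypothetical test-to-requirement mapping
    "TC1": {"R1", "R2"},
    "TC2": {"R2", "R3", "R4"},
    "TC3": {"R4"},
}
print(greedy_schedule(tests))      # ['TC2', 'TC1', 'TC3']
```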
98

Partitioning Strategies to Enhance Symbolic Execution

Marcellino, Brendan Adrian 11 August 2015 (has links)
Software testing is a fundamental part of the software development process. However, testing is still costly and consumes about half of the development cost. The path explosion problem often necessitates considering an extremely large number of paths in order to reach a specific target. Symbolic execution can reduce this cost by using symbolic values and heuristic exploration strategies. Although various exploration strategies have been proposed in the past, the number of Satisfiability Modulo Theories (SMT) solver calls for reaching a target is still large, resulting in longer execution times for programs containing many paths. In this paper, we present two partitioning strategies to mitigate this problem, consequently reducing unnecessary SMT solver calls as well. In sequential partitioning, code sections are analyzed sequentially to take advantage of infeasible paths discovered in earlier sections. On the other hand, using dynamic partitioning on SSA-applied code, the code sections are analyzed in a non-consecutive order guided by data dependency metrics within the sections. Experimental results show that both strategies can achieve significant speedup by reducing the number of unnecessary solver calls in large programs. More than 1000x speedup can be achieved in large programs over conflict-driven learning. / Master of Science
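The operation at the heart of such strategies is a satisfiability check on a path constraint; proving a path infeasible early lets everything beneath it be pruned. The toy sketch below uses the real z3 Python bindings, though the constraints themselves are invented for illustration.

```python
# The core operation symbolic execution repeats at every branch: ask an SMT
# solver whether a path constraint is satisfiable. Uses the real z3 Python
# bindings; the constraints are toy examples.

from z3 import Int, Solver, sat

x = Int("x")

feasible = Solver()
feasible.add(x > 5, x < 100)       # path taking both "x > 5" and "x < 100"
print(feasible.check() == sat)     # True -- worth exploring further

infeasible = Solver()
infeasible.add(x > 5, x < 3)       # contradictory branch conditions
print(infeasible.check() == sat)   # False -- this path (and everything under
                                   # it) can be pruned, which is the saving
                                   # that partitioning tries to realize early
```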
99

Predictive software design measures

Love, Randall James 11 June 2009 (has links)
This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding. Prediction equations are formed at established points in the software development process called milestones. Four areas of predictive measurement are examined at each design milestone for candidate predictive metrics. These areas are: internal complexity, information flow, defect categorization, and the change in design. Prediction equations are created from the set of candidate predictive metrics at each milestone. The most promising of the prediction equations are selected and evaluated. The single "best" prediction equation is selected at each design milestone. The resulting predictions are promising in terms of ranking areas of the software design by the number of predicted defects. Predictions of the actual number of defects are less accurate. / Master of Science
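As a purely illustrative sketch of what a milestone "prediction equation" might look like (the data and metric choices below are invented, not taken from the thesis), one can fit defect counts to design-time metrics by least squares and then rank design areas by their predictions.

```python
# Illustrative only: fitting a linear "prediction equation" that maps
# design-time metrics to observed defect counts. The numbers are made up;
# the thesis builds such equations per milestone from real design artifacts.

import numpy as np

# columns: internal complexity, information flow
X = np.array([[12, 4], [30, 9], [7, 2], [22, 6]], dtype=float)
defects = np.array([3, 9, 1, 6], dtype=float)

A = np.column_stack([X, np.ones(len(X))])      # add an intercept term
coef, *_ = np.linalg.lstsq(A, defects, rcond=None)
predicted = A @ coef

# Ranking design areas by predicted defects is the intended use; the
# abstract notes rankings were more reliable than the raw counts.
print(sorted(range(len(predicted)), key=lambda i: -predicted[i]))
```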
100

Measuring the Software Development Process to Enable Formative Feedback

Kazerouni, Ayaan Mehdi 16 April 2020 (has links)
Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them in industry. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex projects over a three to four week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. Assessment of software currently tends to focus on qualities like correctness, code coverage from test suites, and code style. Little attention or tooling has been developed for the assessment of the software development process. I use empirical software engineering methods like IDE-log analysis, software repository mining, and semi-structured interviews with students to identify effective and ineffective software practices. Using the results of these analyses, I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running the program locally or submitting to an oracle of instructor-written test cases. The goal is to use this information to formulate formative feedback about the software development process. In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and workflow that are not far removed from those faced by professionals in industry. / Doctor of Philosophy / Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them as professional software developers. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex software projects over a three to four week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. The development of these projects necessitates adherence to a disciplined software process, i.e., incremental development, testing, and debugging of small pieces of functionality. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. However, in educational contexts, assessment of software currently tends to focus on properties of the final product like correctness, quality of automated software tests, and adherence to code style requirements. Little attention or tooling has been developed for the assessment of the software development process.
In this dissertation, I quantitatively characterise students' software development habits, using data from numerous sources: usage logs from students' software development environments, detailed sequences of snapshots showing the project's evolution over time, and interviews with the students themselves. I analyse the relationships between students' development behaviours and their project outcomes, and use the results of these analyses to determine the effectiveness or ineffectiveness of students' software development processes. I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running their programs locally or submitting them to an online system that uses instructor-written tests to generate a correctness score. The goal is to use this information to assess the quality of one's software development process in a way that is formative instead of summative, i.e., it can be done while students work toward project completion as opposed to after they are finished. For example, if we can identify procrastinating students early in the project timeline, we could intervene as needed and possibly help them avoid the consequences of bad project management (e.g., unfinished or late project submissions). In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and workflow that are not far removed from those faced by professionals in industry.
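As one hedged example of the kind of process metric such analyses could compute (the metric, names, and data below are hypothetical, not taken from the dissertation), a simple procrastination signal can be derived from timestamped work events by measuring how much activity lands close to the deadline.

```python
# Hypothetical sketch of one process metric: how much of a student's work
# lands near the deadline. Input is a list of work-event timestamps (e.g.,
# from IDE logs or snapshots); all names and numbers are illustrative.

from datetime import datetime, timedelta

def late_work_fraction(events: list[datetime], deadline: datetime,
                       window: timedelta = timedelta(days=3)) -> float:
    """Fraction of work events that fall within `window` of the deadline."""
    if not events:
        return 0.0
    late = sum(1 for t in events if deadline - t <= window)
    return late / len(events)

deadline = datetime(2020, 4, 16, 23, 59)
events = [deadline - timedelta(days=d) for d in (14, 10, 2, 1, 0.5)]
print(late_work_fraction(events, deadline))   # 0.6 -- most work in the final days
```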
