
A Lightweight Approach of Human-Like Playtest for Android Apps

Zhao, Yan, 01 February 2022
Testing is recognized as a key and challenging factor that can either boost or halt game development in the mobile game industry. On the one hand, manual testing is expensive and time-consuming; in particular, the wide spectrum of device hardware and software, so-called fragmentation, significantly increases the cost of testing applications on devices manually. On the other hand, automated testing is also very difficult because games pose more inherent technical challenges than other mobile applications, such as non-native widgets, nondeterminism, and complex game strategies. Current testing frameworks (e.g., Android Monkey, Record and Replay) are limited because they adopt no domain knowledge to test games. Learning-based tools (e.g., Wuji) require tremendous resources and manual effort to train a model before testing any game. The high cost of manual testing and the lack of efficient testing tools for mobile games motivated the work presented in this thesis, which explores easy and efficient approaches to test mobile games effectively. A new Android mobile game testing tool, called LIT, has been developed. LIT is a lightweight approach that generalizes playtest tactics from manual testing and adopts those tactics for automatic game testing. LIT has two phases: tactic generalization and tactic concretization. In Phase I, when a human tester plays an Android game G for a while (e.g., eight minutes), LIT records the tester's inputs and the related scenes. Based on the collected data, LIT infers a set of context-aware, abstract playtest tactics that describe under what circumstances what actions can be taken. In Phase II, LIT tests G based on the generalized tactics. Namely, given a randomly generated game scene, LIT tentatively matches that scene with the abstract context of any inferred tactic; if the match succeeds, LIT customizes the tactic to generate an action for playtest. Our evaluation with nine games shows that LIT outperforms two state-of-the-art tools and a reinforcement learning (RL)-based tool by covering more code and triggering more errors. This implies that LIT complements existing tools and helps developers better test certain games (e.g., match3).

/ Master of Science / Testing is recognized as a key and challenging factor that can either boost or halt game development in the mobile game industry. On the one hand, manual testing is expensive and time-consuming; in particular, the wide spectrum of device hardware and software significantly increases the cost of testing applications on devices manually. On the other hand, automated testing is also very difficult because games pose more inherent technical challenges than other mobile applications. These two factors motivated the work presented in this thesis. A new Android mobile game testing tool, called LIT, has been developed. LIT is a lightweight approach that generalizes playtest tactics from manual testing and adopts those tactics for automatic game testing. A playtest is the process in which testers play video games for software quality assurance. In Phase I, when a human tester plays an Android game G for a while (e.g., eight minutes), LIT records the tester's inputs and the related scenes. Based on the collected data, LIT infers a set of context-aware, abstract playtest tactics that describe under what circumstances what actions can be taken. In Phase II, LIT tests G based on the generalized tactics. Namely, given a randomly generated game scene, LIT tentatively matches that scene with the abstract context of any inferred tactic; if the match succeeds, LIT customizes the tactic to generate an action for playtest. Our evaluation with nine games shows that LIT outperforms two state-of-the-art tools and a reinforcement learning (RL)-based tool by covering more code and triggering more errors. This implies that LIT complements existing tools and helps developers better test certain games (e.g., match3).
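To make the two-phase idea concrete, here is a minimal Python sketch of tactic generalization and concretization. It is an illustration only, not the thesis's implementation: the data model (SceneObject, RecordedStep), the context abstraction (the set of object labels in a scene), and the fallback to a random tap are all assumptions introduced for this example, whereas the real LIT works on Android UI events and game scenes.

```python
# Minimal sketch of LIT's two-phase idea (hypothetical data model, not the thesis's code).
# Phase I: generalize recorded (scene, action) pairs into context -> abstract-action tactics.
# Phase II: match a fresh scene against those contexts and concretize an action.

from collections import defaultdict
from dataclasses import dataclass
import random

@dataclass(frozen=True)
class SceneObject:
    label: str        # e.g., "gem_red", "button_play"
    x: int
    y: int

@dataclass
class RecordedStep:
    objects: list     # SceneObject instances visible in the scene
    tapped_label: str # label of the object the human tester tapped

def abstract_context(objects):
    """Abstract a concrete scene into a context: the set of object labels present."""
    return frozenset(o.label for o in objects)

def generalize_tactics(trace):
    """Phase I: for each abstract context, remember which labels the tester tapped."""
    tactics = defaultdict(list)
    for step in trace:
        tactics[abstract_context(step.objects)].append(step.tapped_label)
    return tactics

def concretize_action(tactics, objects):
    """Phase II: match the scene's context against learned tactics and pick a tap target."""
    context = abstract_context(objects)
    for learned_context, tapped_labels in tactics.items():
        if learned_context <= context:           # tactic's context is satisfied by this scene
            target_label = random.choice(tapped_labels)
            for obj in objects:
                if obj.label == target_label:    # map abstract target back to concrete coordinates
                    return ("tap", obj.x, obj.y)
    return ("random_tap",)                       # fall back when no tactic matches

# Usage: record a short manual session, then drive automated play from the tactics.
trace = [RecordedStep([SceneObject("button_play", 540, 1600)], "button_play")]
tactics = generalize_tactics(trace)
print(concretize_action(tactics, [SceneObject("button_play", 300, 900)]))
```

The key point the sketch mirrors is that a tactic is stored against an abstract context rather than a pixel-exact scene, so it can be re-applied when a new scene merely satisfies that context, with the concrete action (here, tap coordinates) filled in from the new scene at test time.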
