1

Does team-based testing promote individual learning?

Walker, Joshua David 08 June 2011 (has links)
Team-based testing gives students a chance to earn additional points on individual unit tests by immediately re-taking the test as a team competing against other teams. This instructional approach has enjoyed widening implementation and impressive anecdotal support, but there remains a dearth of empirical studies evaluating its prescribed processes and promoted outcomes. Although the posited effectiveness and appeal of team-based testing seem consistent with the benefits of test-enhanced learning and collaborative learning in general, several limitations are readily apparent. Namely, the current format of the individual and team readiness assurance tests is expressly multiple-choice. Though this type of question has some advantages (e.g., ease of administering and grading), its long-term cognitive disadvantage relative to short-answer questions is well documented. Furthermore, it is not clear whether the proposed gain in learning through this format is attributable to the group effect, be it social or cognitive, or simply to repeated exposure to the test items. Therefore, this study measured the effects of initial test question Format (short-answer vs. multiple-choice), Mode (individual vs. group), and Exposure (once vs. twice) on four delayed measures of learning: Old multiple-choice items (items students had initially been tested on), Old short-answer items, New multiple-choice items, and New short-answer items. Two weeks after watching a video-recorded lecture, 208 college students took a thirty-item test comprising both the old and new items in multiple-choice and short-answer formats. Results revealed that 1) taking an initial test twice is better than taking it once when the delayed test has old short-answer items or new multiple-choice items, 2) taking an initial short-answer test is better than taking a multiple-choice test when the delayed test has old multiple-choice, old short-answer, or new multiple-choice items, and 3) taking an initial team test is no different from taking an individual test when it comes to long-term learning. Particularly noteworthy from these results is that a) the effects of short-answer tests and of taking tests twice are not present within Team conditions, and b) taking a multiple-choice test twice is as effective as taking a short-answer test once. Implications are discussed in light of learning theory and instructional practice.
2

Towards Collaborative GUI-based Testing

Bauer, Andreas January 2023 (has links)
Context: Contemporary software development is a socio-technical activity requiring extensive collaboration among individuals with diverse expertise. Software testing is an integral part of software development that likewise depends on varied expertise. GUI-based testing assesses a system and its behavior through its graphical user interface. Collaborative practices in software development, such as code reviews, not only improve software quality but also promote knowledge exchange within teams. Similar benefits could extend to other areas of software engineering, such as GUI-based testing. However, collaborative practices for GUI-based testing require a distinct approach, since general software development practices arguably cannot be transferred directly to software testing. Goal: This thesis contributes towards a tool-supported approach enabling collaborative GUI-based testing. Our distinct goals are (1) to identify processes and guidelines that enable collaboration on GUI-based testing artifacts and (2) to operationalize tool support to aid this collaboration. Method: We conducted a systematic literature review identifying code review guidelines for GUI-based testing. We also conducted a controlled experiment to assess the efficiency and potential usability issues of Augmented Testing. Results: We provide guidelines for reviewing GUI-based testing artifacts, which aid contributors and reviewers during code reviews. We further provide empirical evidence that Augmented Testing is not only an efficient approach to GUI-based testing but also usable for non-technical users, making it a promising subject for further research in collaborative GUI-based testing. Conclusion: Code review guidelines aid collaboration through discussions, and a suitable testing approach can serve as a platform to operationalize collaboration. Collaborative GUI-based testing has the potential to improve the efficiency and effectiveness of such testing.
