1 |
Framework to manage labels for e-assessment of diagrams / Jayal, Ambikesh, January 2010 (has links)
Automatic marking of coursework has many advantages in terms of resource savings and consistency. Diagrams are common in many domains, including computer science, but marking them automatically is a challenging task. There has been previous research towards this goal, but results to date have been limited. Much of the meaning of a diagram is contained in its labels, so in order to mark diagrams automatically the labels need to be understood. However, the choice of labels used by students in a diagram is largely unrestricted, and this diversity of labels complicates matching. This thesis has measured the extent of the diagram label matching problem and proposed and evaluated a configurable, extensible framework to solve it. A new hybrid syntax matching algorithm has also been proposed and evaluated; this hybrid approach builds on multiple existing syntax algorithms. Experiments were conducted on a corpus of coursework that was large scale, realistic and representative of UK HEI students. The results show that diagram label matching is a substantial problem that cannot easily be avoided in the e-assessment of diagrams. They also show that the hybrid approach outperformed the three existing syntax algorithms, and that the framework was effective, though only to a limited extent, and needs further refinement at the semantic stage. The framework proposed in this thesis is configurable and extensible: it can be extended to include other algorithms and sets of parameters. The framework uses configuration XML, dynamic loading of classes, and two design patterns, namely the Strategy and Facade patterns. A software prototype implementation of the framework has been developed in order to evaluate it. Finally, this thesis also contributes the corpus of coursework and an open-source software implementation of the proposed framework.
Since the framework is configurable and extensible, its software implementation can be extended and used by the research community.
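A hybrid label matcher of the kind this abstract describes can be sketched as a set of interchangeable similarity strategies whose scores are combined. The three algorithms below (edit-distance ratio, character-bigram Jaccard, token overlap) and the max-combination rule are illustrative assumptions, not the thesis's actual algorithms:

```python
# Hypothetical sketch of a hybrid syntax matcher for diagram labels.
# Each similarity measure is a pluggable strategy (echoing the Strategy
# pattern mentioned in the abstract); the hybrid score takes the best.
from difflib import SequenceMatcher

def levenshtein_ratio(a, b):
    # difflib's ratio approximates normalised edit similarity in [0, 1]
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def bigram_jaccard(a, b):
    def bigrams(s):
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}
    x, y = bigrams(a), bigrams(b)
    return len(x & y) / len(x | y) if x | y else 1.0

def token_overlap(a, b):
    x, y = set(a.lower().split()), set(b.lower().split())
    return len(x & y) / max(len(x), len(y)) if x or y else 1.0

STRATEGIES = [levenshtein_ratio, bigram_jaccard, token_overlap]

def hybrid_score(student_label, model_label):
    # Combine strategies by taking the maximum similarity
    return max(s(student_label, model_label) for s in STRATEGIES)

def match(student_label, model_label, threshold=0.7):
    return hybrid_score(student_label, model_label) >= threshold
```

The threshold and the choice of max (rather than, say, a weighted average) are tunable parameters, which is the kind of configurability the framework's XML configuration would expose.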
|
2 |
A Collaborative Electronic Behavior Assessment System (eBA): Validation and Evaluation of Feasibility / Silvestre, Carlos E., 02 November 2018 (has links)
This study validated and evaluated the feasibility of a web-based electronic behavior assessment system, 'eBA', designed to facilitate collaboration between caregivers and service providers (behavior analysts) in conducting indirect functional behavior assessment (FBA). In Phase 1, the content and web architecture of the eBA were validated and refined through a formative evaluation by five behavior analysts. In Phase 2, the eBA system was pilot tested with 10 service providers and 10 caregivers using a post-test-only control group design to examine the efficiency and quality of the system and to identify levels of satisfaction among service providers and caregivers. The results indicated that the eBA system components were appropriate for conducting indirect FBA, supported collaborative use by caregivers and service providers, gathered quality information, and yielded higher levels of caregiver and service-provider satisfaction than a traditional paper-and-pencil assessment format.
|
3 |
The development of a framework for evaluating e-assessment systems / Singh, Upasana Gitanjali, 11 1900 (has links)
Academics encounter problems with the selection, evaluation, testing and implementation of e-assessment software tools. The researcher experienced these problems while adopting e-assessment at the university where she is employed, and hence undertook this study, which is situated in schools and departments in Computing-related disciplines, namely Computer Science, Information Systems and Information Technology, at South African Higher Education Institutions. The literature suggests that further research is required in this domain. Furthermore, preliminary empirical studies indicated similar disabling factors at other South African tertiary institutions, acting as barriers to long-term implementation of e-assessment. Despite this, academics who have adopted e-assessment report satisfaction, particularly when conducting assessments with large classes. Multiple-choice questions can be assessed automatically, leading to increased productivity and more frequent assessments. The purpose of this research is to develop an evaluation framework to assist academics in determining which e-assessment tool to adopt, enabling them to make more informed decisions. Such a framework would also support evaluation of existing e-assessment systems.
The underlying research design is action research, which supported an iterative series of studies for developing, evaluating, applying, refining, and validating the SEAT (Selecting and Evaluating an e-Assessment Tool) Evaluation Framework and subsequently an interactive electronic version, e-SEAT. Phase 1 of the action research comprised Studies 1 to 3, which established the nature, context and extent of adoption of e-assessment. This set the foundation for development of SEAT in Phase 2. During Studies 4 to 6 in Phase 2, a rigorous sequence of evaluation and application facilitated the transition from the manual SEAT Framework to the electronic evaluation instrument, e-SEAT, and its further evolution.
This research resulted in both a theoretical contribution (SEAT) and a practical contribution (e-SEAT). The findings of the action research contributed, along with the literature, to the categories and criteria in the framework, which in turn, contributed to the bodies of knowledge on MCQs and e-assessment.
The final e-SEAT version, the ultimate product of this action research, is presented in Appendix J1. For easier reference, the appendices are included on a CD attached to the back cover of this thesis. / Computing / PhD. (Information Systems)
|
4 |
Akzeptanz elektronischer Befragung zur Lebensqualität in der Hausarztpraxis / Acceptance of electronic quality of life assessment in general practice / Seibert, Anna Janina, 12 April 2011 (has links)
No description available.
|
5 |
Towards electronic assessment of web-based textual responses / Conradie, Martha Maria, 30 June 2003 (has links)
Web-based learning should move away from static transmission of instruction to dynamic pages
for effective interactive learning. Furthermore, automated assessment of learning should move
beyond rigid quizzes or multiple-choice questions.
This study describes the design, development, implementation, testing and evaluation of two
prototypes of an electronic assessment tool to enhance the effectiveness of automated
assessment. The tool was developed in the context of a distance-learning organisation and
was built according to a development research model entailing a cyclic design-intervention-outcomes
process.
The first variant, E-Grader, was developed to test an algorithm for assigning marks to open-ended
textual responses. The second variant, Web-Grader, was an interactive web-based
extension of E-Grader. It provided immediate interactive support to students as they responded
textually to content-based questions.
This multi-disciplinary study incorporates principles and techniques from software engineering,
formal computer science, database development and instructional design in the quest towards
electronic assessment of web-based textual inputs. / Computing / M.Sc. (Information Systems)
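An algorithm for assigning marks to open-ended textual responses, of the kind E-Grader was built to test, can be sketched as matching a response against a marking scheme of expected terms. The scheme, weights, and matching rule below are illustrative assumptions, not the thesis's actual algorithm:

```python
# Hypothetical sketch of keyword-based marking for short textual answers,
# in the spirit of the E-Grader prototype described above.
import re

def mark_response(response, scheme):
    """scheme maps an expected keyword or phrase to the marks it earns."""
    text = response.lower()
    awarded = 0
    feedback = []  # per-criterion notes, usable as interactive feedback
    for keyword, marks in scheme.items():
        # Whole-word match so 'key' does not match inside 'monkey'
        if re.search(r"\b" + re.escape(keyword.lower()) + r"\b", text):
            awarded += marks
            feedback.append(f"'{keyword}' found: +{marks}")
        else:
            feedback.append(f"'{keyword}' missing")
    return awarded, feedback

scheme = {"primary key": 2, "foreign key": 2, "referential integrity": 1}
score, notes = mark_response(
    "A foreign key references a primary key in another table.", scheme)
# score == 4: both key terms found, 'referential integrity' missing
```

The per-criterion feedback list is the kind of immediate, interactive support the Web-Grader variant is described as providing to students.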
|