
Measurement of interactive manual effectiveness: How do we know if our manuals are effective?

Multimedia learning is now part of everyday life, and learning from digital sources on the internet is probably more common than learning from printed material. The goal of this project is to determine whether measuring user interaction in an interactive manual can be used to evaluate the manual's effectiveness. Since feedback on multimedia learning materials is costly to gather through face-to-face interaction, automatically collected feedback data could be useful for evaluating and improving their quality. In this project, an interactive manual was developed for a real-world report-generating application and then tested with 21 users. Using the k-nearest neighbour machine learning algorithm, the results show that neither the time spent on each step nor the number of views of each step provided a good evaluation of the manual. The number of faults made by the user was a good predictor of whether the user would abort the manual, and in combination with the number of acceptable interactions, the usability data did provide better classification than ZeroR classification. The conclusions are limited by the small dataset used in this project.
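The comparison described above can be sketched in miniature. The following is an illustrative example only, not the thesis's actual code or data: it implements a plain k-nearest-neighbour vote and a ZeroR baseline (always predict the majority class) over made-up usability features per user, here assumed to be (faults, acceptable interactions), with "completed"/"aborted" as the outcome.

```python
from collections import Counter
import math

def zero_r(train_labels):
    """ZeroR baseline: ignore features, always predict the majority class."""
    return Counter(train_labels).most_common(1)[0][0]

def knn_predict(train, labels, point, k=3):
    """Classify `point` by majority vote among its k nearest neighbours."""
    order = sorted(range(len(train)),
                   key=lambda i: math.dist(train[i], point))
    votes = [labels[i] for i in order[:k]]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical per-user features: (faults, acceptable interactions)
train = [(0, 12), (1, 10), (5, 3), (7, 1), (1, 11)]
labels = ["completed", "completed", "aborted", "aborted", "completed"]

baseline = zero_r(labels)                       # majority class, features ignored
pred = knn_predict(train, labels, (6, 2), k=3)  # vote among 3 nearest users
```

On data like this, ZeroR predicts "completed" for everyone, so a kNN classifier only demonstrates value if its feature-based vote beats that majority-class accuracy, which is the comparison the project reports.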

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:lnu-75415
Date January 2018
Creators Ståhlberg, Henrik
Publisher Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), Linnaeus University
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
