This research developed a novel method that uses an easy-to-deploy, single-dry-electrode wireless electroencephalogram (EEG) collection device as the input to an automated system that measures indicators of a participant's attentiveness while watching a short training video. The results are promising, including 85% or better accuracy in identifying whether a participant is watching a segment of video from a boring scene or lecture versus a segment from an attentiveness-inducing active lesson or memory quiz. In addition, the final system produces an ensemble average of attentiveness across many participants, pinpointing the places in the training videos that induce peak attentiveness.

Qualitative analysis of the results is also very promising. The system produces attentiveness graphs for individual participants, and these triangulate well with the thoughts and feelings those participants reported, in their own words, during different parts of the videos.

As distance learning and computer-based training become more popular, it is of great interest to measure whether students are attentive to recorded lessons and short training videos. This research was motivated by that interest, as well as by recent advances in electrical and computer engineering's use of biometric signal analysis to detect affective (emotional) response. Signal processing of EEG has proven useful in measuring alertness and emotional state, and even in very specific applications such as predicting whether participants will recall television commercials days after seeing them. This research extended those advances by creating an automated system that measures attentiveness toward short training videos. The bulk of the work focused on electrical and computer engineering, specifically the optimization of signal-processing algorithms for this application.
A review of existing EEG signal-processing and feature-extraction methods shows a common subdivision of steps across different EEG applications: hardware sensing, filtering, and digitizing; noise removal; segmenting the continuous EEG data into windows for processing; normalization; transformation to extract frequency or scale information; treatment of phase or shift information; and additional post-transformation noise reduction. Within the currently documented state of the art there is a large degree of variation in most of these steps. This research connected these varied methods into a single holistic model that allows optimal algorithms to be compared and selected for this application, providing a structured, orderly comparison of individual signal-analysis and feature-extraction methods. The study created a concise algorithmic approach for examining all of the aforementioned steps, establishing a framework in which options could be tested, compared, and optimized under rigorous participant cross-validation. Novel signal-analysis methods were also developed, using new techniques for choosing parameters, which greatly improved performance. The research also used machine learning to automatically categorize the extracted features into measures of attentiveness, improving existing machine-learning methods with novel techniques, including the use of per-participant baselines with kNN classification. This extended current EEG signal-analysis methods from other applications and refined them for measuring attentiveness toward short training videos. The resulting algorithms were shown to be optimal through selection of the best signal-analysis and machine-learning steps, identified via both n-fold and participant cross-validation.
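The pipeline steps above can be sketched in miniature. Everything here is a simplified stand-in for the thesis's optimized methods: the sampling rate, window length, alpha band, and the specific baseline-division rule are illustrative assumptions, not the dissertation's actual parameters, and the kNN is a toy implementation:

```python
import numpy as np

FS = 128  # assumed sampling rate (Hz) of the single dry electrode

def band_power(window, fs=FS, band=(8.0, 13.0)):
    """Mean spectral power of one EEG window in a frequency band
    (the alpha band here, purely as an illustrative choice)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

def features(signal, fs=FS, win_sec=2.0):
    """Segment a continuous EEG signal into fixed non-overlapping
    windows and extract one band-power feature per window."""
    n = int(fs * win_sec)
    wins = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.array([[band_power(w)] for w in wins])

def knn_with_baseline(train_x, train_y, baseline, query, k=3):
    """Toy kNN in which every feature vector is first divided by a
    per-participant baseline vector: a stand-in for the thesis's
    per-participant-baseline method, not its actual algorithm."""
    tx = train_x / baseline
    q = query / baseline
    dists = np.linalg.norm(tx - q, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    # Majority vote among the k nearest neighbors.
    return int(np.bincount(nearest).argmax())

# Demo: a synthetic 4-second "alpha-dominated" signal yields two
# 2-second windows, each with one band-power feature.
t = np.arange(0, 4, 1.0 / FS)
X = features(np.sin(2 * np.pi * 10.0 * t))

# Demo: classify a query feature against two toy classes
# (0 = "bored" segment, 1 = "attentive" segment).
pred = knn_with_baseline(
    train_x=np.array([[1.0], [1.1], [0.9], [3.0], [3.2], [2.8]]),
    train_y=np.array([0, 0, 0, 1, 1, 1]),
    baseline=np.array([1.0]),
    query=np.array([2.9]),
)
```

Dividing by a per-participant baseline is one plausible way to remove between-subject scale differences before distance computation, which is the kind of problem a per-participant baseline in kNN addresses.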
The creation of this new system, which applies EEG signal processing to detect attentiveness toward short training videos, represents a significant advance in this field.
Identifier | oai:union.ndltd.org:vcu.edu/oai:scholarscompass.vcu.edu:etd-1557 |
Date | 18 October 2013 |
Creators | Nussbaum, Paul |
Publisher | VCU Scholars Compass |
Source Sets | Virginia Commonwealth University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | © The Author |