Activity classification based on sensor data is a challenging task. Many studies have focused on two main approaches to activity classification: sensor-level classification and body-model-level classification. This study aims to enable activity classification across sensor domains by considering an e-textile garment and to provide the groundwork for transferring the e-textile garment's activity data to a vision-based classifier. The framework comprises three main components that enable the successful transfer of the body-worn system to the vision-based classifier. The inter-class confusion of the activity space is quantified to allow an idealized prediction of known-class accuracy at varying levels of error within the system. Methods for quantifying sensor- and garment-level error are applied to identify challenges specific to a body-worn system. These methods are then used to inform decisions about the classifier's accuracy and classification threshold. Using activities from a vision-based system known to the classifier, a user study was conducted to generate an observed set of activities from the body-worn system. The results indicate that the vision-based classifier used is user-independent and can successfully handle classification across sensor domains. / Master of Science
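The abstract does not spell out how inter-class confusion or the classification threshold are computed. The sketch below is a minimal Python illustration of one plausible interpretation, assuming a per-class confusion matrix and per-class probability scores are available; the function names, the rejection label -1, and the 0.5 threshold are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def inter_class_confusion(confusion_matrix):
    """Return the off-diagonal (misclassification) rate per class.

    confusion_matrix[i, j] counts samples of true class i predicted as class j.
    """
    cm = np.asarray(confusion_matrix, dtype=float)
    row_totals = cm.sum(axis=1)
    correct = np.diag(cm)
    # Fraction of each class confused with some other class
    return (row_totals - correct) / np.maximum(row_totals, 1)

def thresholded_predictions(class_probabilities, threshold):
    """Map per-class probabilities to labels, rejecting low-confidence samples.

    Returns -1 ("unknown") where the top probability falls below the threshold.
    """
    probs = np.asarray(class_probabilities, dtype=float)
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = -1
    return labels

# Toy example: 3 activity classes
cm = [[40, 5, 5],
      [3, 45, 2],
      [6, 4, 40]]
print(inter_class_confusion(cm))            # per-class confusion rates
probs = np.array([[0.70, 0.20, 0.10],
                  [0.40, 0.35, 0.25]])
print(thresholded_predictions(probs, 0.5))  # second sample rejected as unknown
```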
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/51196 |
Date | 17 January 2015 |
Creators | Dennis, Jacob Henry |
Contributors | Electrical and Computer Engineering, Martin, Thomas L., Jones, Mark T., Polys, Nicholas F. |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |