
Efficient Wearable Big Data Harnessing and Mining with Deep Intelligence

Wearable devices, now deployed ubiquitously across many areas of health care, provide key insights into patient and individual status through big data captured by sensors placed at key parts of the body. While small and low cost, they are limited by their computational and battery capacity. One key use of wearables is individual activity capture. In accelerometer and gyroscope data, oscillatory patterns distinguish the daily activities that users perform. Leveraging spatial and temporal learning via CNN and LSTM layers to capture both the intra- and inter-oscillatory patterns that appear during these activities, we deployed data sparsification via autoencoders to extract the key topological properties from the data and transmit the compressed data over BLE to a central device for later decoding and analysis. Several autoencoder designs were developed to establish system-design principles, comparing encoding overhead on the sensor device against signal reconstruction accuracy. By leveraging an asymmetric autoencoder design, we offloaded much of the computational and power cost of signal reconstruction from the wearable to the central device, while still providing robust reconstruction accuracy at several compression efficiencies. Using a high-precision Bluetooth voltmeter, the integrated sparsified data transmission configuration was tested at all quantization levels and compression efficiencies, yielding lower power consumption than the setup without data sparsification for all autoencoder configurations.
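As a rough illustration of the asymmetric split described above, the sketch below pairs a small encoder (the part that would run on the wearable) with a deeper decoder (run on the central device). It is a minimal Keras sketch, not the thesis code: the window width, latent size, and layer widths are assumed values, since the abstract does not specify them.

```python
# Minimal sketch of an asymmetric autoencoder for 6-axis IMU windows.
# The window width (128 samples), latent size (32), and layer widths are
# illustrative assumptions, not values taken from the abstract.
from tensorflow.keras import layers, models

WINDOW = 128    # samples per transmitted window (assumed)
CHANNELS = 6    # 3-axis accelerometer + 3-axis gyroscope
LATENT = 32     # compressed representation sent over BLE (assumed)

# Lightweight encoder: fewer parameters mean lower on-device compute and power cost.
encoder = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(LATENT, activation="linear", name="latent"),
], name="wearable_encoder")

# Heavier decoder: reconstruction cost is offloaded to the central device.
decoder = models.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(WINDOW * CHANNELS, activation="linear"),
    layers.Reshape((WINDOW, CHANNELS)),
], name="central_decoder")

autoencoder = models.Sequential([encoder, decoder], name="asymmetric_autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
```

After training, the encoder alone would be deployed on the wearable (optionally with post-training quantization), and only the latent vector would be transmitted over BLE for decoding on the central device.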
Human activity recognition (HAR) is a key facet of lifestyle and health monitoring. Effective HAR classification mechanisms and tools can give healthcare professionals, patients, and individuals key insights into activity levels and behaviors without intrusive human or camera observation. We leverage both spatial and temporal learning mechanisms via integrated CNN and LSTM architectures to derive an optimal classification architecture that provides robust performance on raw activity inputs, and we determine that an LSTMCNN using a stacked bidirectional LSTM layer provides superior classification performance to the CNNLSTM (also using a stacked bidirectional LSTM) at all input widths. All inertial data classification frameworks are based on sensor data drawn from wearable devices placed at key sections of the body. Because wearable devices lack computational and battery power, data compression techniques have been employed to limit the quantity of transmitted data and reduce on-board power consumption. While this compression methodology has been shown to reduce overall device power consumption, it comes at the cost of some information loss in the reconstructed signals. By employing an asymmetric autoencoder design and training the LSTMCNN classifier on the reconstructed inputs, we minimized the classification performance degradation caused by the wearable signal reconstruction error. The classifier was further trained on the autoencoder outputs for several input widths and with both quantized and unquantized models. The accuracy of the classifier trained on reconstructed data ranged between 86.5% and 93.0%, depending on input width and autoencoder quantization, showing the promising potential of deep learning with wearable sparsification.
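The sketch below illustrates the LSTMCNN ordering described above: stacked bidirectional LSTM layers followed by 1-D convolutional layers and a softmax classifier. It is a minimal Keras sketch under assumed hyperparameters (input width, unit counts, number of activity classes); it is not the thesis architecture verbatim.

```python
# Minimal sketch of an LSTMCNN HAR classifier: stacked bidirectional LSTMs
# first, then 1-D convolutions. All hyperparameters below are assumptions.
from tensorflow.keras import layers, models

WINDOW, CHANNELS, NUM_CLASSES = 128, 6, 6   # assumed input width and class count

lstmcnn = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Stacked bidirectional LSTMs capture temporal (inter-oscillatory) structure.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    # 1-D convolutions then capture local (intra-oscillatory) patterns.
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
], name="lstmcnn_har")

lstmcnn.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
# Per the abstract, training this classifier on decoder-reconstructed windows
# (rather than raw windows) is what limits the accuracy drop from sparsification.
```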

DOI: 10.25394/pgs.20382939.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/20382939
Date: 27 July 2022
Creators: Elijah J Basile (13161057)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Efficient_Wearable_Big_Data_Harnessing_and_Mining_with_Deep_Intelligence/20382939
