61 |
Remote Smoker Monitoring System Incorporating Preemptive Smoking DetectionMaguire, Gabriel 01 September 2021 (has links)
No description available.
|
62 |
Spatio-temporal reasoning for semantic scene understanding and its application in recognition and prediction of manipulation actions in image sequencesZiaeetabar, Fatemeh 07 May 2019 (has links)
No description available.
|
63 |
Analysis of Data from a Smart Home Research EnvironmentGuthenberg, Patrik January 2022 (has links)
This thesis project presents a system for gathering and using data in the context of a smart home research environment. The system was developed at the Human Health and Activity Laboratory, H2Al, at Luleå University of Technology and consists of two distinct parts. First, a data export application that runs in the H2Al environment. This application synchronizes data from various sensor systems and forwards the data for further analysis. This analysis was performed in the iMotions platform in order to visualize, record and export data. As a delimitation, the only sensor used was the WideFind positional system installed at the H2Al. Secondly, an activity recognition application that uses data generated from the iMotions platform and the data export application. This includes several scripts which transform raw data into labeled datasets and translate them into activity recognition models with the help of machine learning algorithms. As a delimitation, activity recognition was limited to fall detection. These fall detection models were then hosted on a basic server to test accuracy and to act as an example use case for the rest of the project. The project resulted in an effective data gathering system and was generally successful as a tool to create datasets. The iMotions platform was especially successful in both visualizing and recording data together with the data export application. The example fall detection models trained showed theoretical promise, but failed to deliver good results in practice, partly due to the limitations of the positional sensor system used. Some of the conclusions drawn at the end of the project were that the data collection process needed more structure, planning and input from professionals; that a better positional sensor system may be required for better fall detection results; and that this kind of system shows promise in the context of smart homes, especially within areas like elderly healthcare.
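The abstract does not give implementation details, but the final step it describes (turning labeled positional windows into a fall-detection model with a machine learning algorithm) can be sketched hypothetically. Everything below is an illustrative assumption: the features, the synthetic data standing in for WideFind positional readings, and the choice of a scikit-learn random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_features(window):
    """window: (n_samples, 3) array of x, y, z positions for one time window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [window[:, 2].min()]])  # min height hints at a fall

# Synthetic stand-in data: "normal" windows near standing height vs. "fall"
# windows with low, noisy height. The thesis used real positional sensor data.
normal = [rng.normal([0.0, 0.0, 1.5], 0.05, size=(50, 3)) for _ in range(40)]
falls = [rng.normal([0.0, 0.0, 0.3], 0.30, size=(50, 3)) for _ in range(40)]
X = np.array([extract_features(w) for w in normal + falls])
y = np.array([0] * 40 + [1] * 40)  # 0 = normal activity, 1 = fall

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

A model like this could then be hosted behind a basic HTTP endpoint, matching the project's example use case.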
|
64 |
Video processing for safe food handlingChengzhang Zhong (10706937) 27 April 2021 (has links)
<p>Most foodborne illnesses result from inappropriate food handling practices. One proven practice to reduce pathogens is to perform effective hand-hygiene before all stages of food handling. In food handling, there exist steps to achieve good manufacturing practices (GMPs). Traditionally, assessing food handling quality would require hiring a food expert for an audit, which is expensive. Recently, recognizing activities in videos has become a rapidly growing field with wide-ranging applications. In this presentation, we propose to approach the assessment of hand-hygiene quality, which is a crucial step in food handling, with video analytic methods: action recognition and action detection algorithms. Our approaches focus on hand-hygiene activities with different requirements, including camera views and scenario variations. </p>
<p> </p>
For hand-hygiene with egocentric video data, we create a two-stage system to localize and recognize all the hand-hygiene actions in each untrimmed video. This involves applying a low-cost hand mask and motion histogram features to localize the temporal regions of hand-hygiene actions. For hand-hygiene with multi-camera view video data, we design a system that processes untrimmed video from both egocentric and third-person cameras, where each hand-hygiene action is recognized with its “expert” camera view. For hand-hygiene across different scenarios, we propose a multi-modality framework to recognize hand-hygiene actions in untrimmed video sequences. We use modalities such as RGB, optical flow, hand segmentation masks, and human skeleton joints to construct individual CNNs, and apply a hierarchical method to recognize hand-hygiene actions.
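The first stage of the two-stage system, temporal localization from motion features, can be sketched in a minimal form: threshold a per-frame motion-energy signal and group the active frames into segments. This is only the skeleton of the idea; the thesis's actual approach also uses a hand mask and histogram features, and the `thresh` value here is an illustrative parameter.

```python
import numpy as np

def localize_actions(frames, thresh=1.0):
    """Return (start, end) frame spans whose motion energy exceeds thresh.

    frames: (T, H, W) grayscale video. Motion energy is the mean absolute
    difference between consecutive frames.
    """
    energy = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    active = energy > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# synthetic clip: static frames, then 10 frames of motion, then static again
rng = np.random.default_rng(0)
clip = np.zeros((30, 16, 16))
clip[10:20] = rng.uniform(0, 255, size=(10, 16, 16))
print(localize_actions(clip, thresh=1.0))
```

Each localized segment would then be passed to the second stage for action recognition.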
|
65 |
Dimensionality Reduction in Healthcare Data Analysis on Cloud PlatformRay, Sujan January 2020 (has links)
No description available.
|
66 |
How can machine learning help identify cheating behaviours in physical activity-based mobile applications?Kock, Elina, Sarwari, Yamma January 2020 (has links)
This study investigates the possibility of using machine learning for Human Activity Recognition (HAR) in Bamblup, a physical activity-based game for smartphones, in order to detect whether a player is cheating or is indeed performing the required activity. Sensor data from an accelerometer and a gyroscope in an iPhone 7 was used to gather data from various people performing a set of activities. The activities of interest are jumping, squatting, stomping, and their cheating counterparts: fake jumping, fake squatting, and fake stomping. A Sequential model was created using the free open-source library TensorFlow.
Feature Selection was performed using the program WEKA (Waikato Environment for Knowledge Analysis) to select the attributes which provided the most information gain. These attributes were subsequently used to train the model in TensorFlow, which gave a classification accuracy of 66%. The fake activities were classified relatively well, as was the stomping activity. Jumping and squatting had the lowest accuracy, at 21.43% and 28.57% respectively. Additionally, the Random Forest classifier in WEKA was tested on the dataset using 10-fold cross-validation, providing a classification accuracy of 90.47%. Our findings imply that machine learning is a strong candidate for aiding in the detection of cheating behaviours in mobile physical activity-based games.
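The best-performing experiment here, a Random Forest evaluated with 10-fold cross-validation, translates directly from WEKA to scikit-learn. The sketch below uses synthetic stand-in features (the thesis's iPhone dataset is not public), with one Gaussian cluster per activity class; the six classes mirror the jump/squat/stomp activities and their fake counterparts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# synthetic stand-in for per-window accelerometer/gyroscope feature vectors,
# one cluster per class: jump, squat, stomp and their fake counterparts
n_per_class, n_features = 30, 12
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(6)])
y = np.repeat(np.arange(6), n_per_class)

# 10-fold cross-validation (stratified by default for classifiers)
scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=10)
print(scores.mean())
```

Comparing this mean fold accuracy against a confusion matrix per class would reveal whether particular activities (as jumping and squatting did in the study) drag the average down.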
|
67 |
LSTM Neural Networks for Detection and Assessment of Back Pain Risk in Manual LiftingThomas, Brennan January 2021 (has links)
No description available.
|
68 |
Machine Activity Recognition with Augmented Self-trainingWang, Ruiyun January 2022 (has links)
One of the most important business elements in primary and secondary industries is monitoring equipment. Understanding the usage of heavy logistics machinery can help in realizing the potential of this machinery and improving it. With the purpose of monitoring and quantifying machine usage, Machine Activity Recognition (MAR) problems can be solved with machine learning techniques. In this project, we propose an augmented Self-training method, which combines Self-training with data augmentation, to solve forklift trucks' MAR problem on Controller Area Network (CAN bus) data. Compared to standard Self-training, augmented Self-training performs data augmentation on pseudo-labeled data to inject noise and improve model generalization. The best student model of the augmented Self-training achieves 71.8% balanced accuracy (BA), an improvement of 3.0% over applying Supervised Learning alone (68.8% BA). In addition, the Matthews Correlation Coefficient (MCC) for the augmented Self-training's best student model reaches 0.658, an increase of 0.031 compared to an MCC of 0.627 from Supervised Learning alone. The augmented Self-training thus improves model performance.
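The core loop the abstract describes, Self-training in which confident pseudo-labeled samples are perturbed before joining the training set, might be sketched as below. The base learner, confidence threshold, and Gaussian-noise augmentation are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augmented_self_training(X_lab, y_lab, X_unlab, rounds=3,
                            conf_thresh=0.9, noise_std=0.1, seed=0):
    """Self-training where confident pseudo-labeled samples are perturbed
    with Gaussian noise (the 'augmentation') before joining the training set.
    Assumes integer class labels 0..K-1, so argmax indices equal labels."""
    rng = np.random.default_rng(seed)
    X_tr, y_tr = X_lab, y_lab
    for _ in range(rounds):
        teacher = RandomForestClassifier(n_estimators=50,
                                         random_state=seed).fit(X_tr, y_tr)
        proba = teacher.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= conf_thresh
        if not keep.any():
            break
        # augmentation step: inject noise into the pseudo-labeled samples
        pseudo_X = X_unlab[keep] + rng.normal(0, noise_std, X_unlab[keep].shape)
        X_tr = np.vstack([X_lab, pseudo_X])
        y_tr = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    # the final "student" trains on labeled + augmented pseudo-labeled data
    return RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)

# demo on synthetic two-class data: 20 labeled samples, 180 unlabeled
rng = np.random.default_rng(1)
X0 = rng.normal(-2.0, 1.0, size=(100, 4))
X1 = rng.normal(2.0, 1.0, size=(100, 4))
X_lab = np.vstack([X0[:10], X1[:10]])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([X0[10:], X1[10:]])
student = augmented_self_training(X_lab, y_lab, X_unlab)
print((student.predict(X_unlab) == np.array([0] * 90 + [1] * 90)).mean())
```

The noise injection is what distinguishes this from plain Self-training: without it, the student simply re-learns the teacher's decision boundary.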
|
69 |
Efficient Wearable Big Data Harnessing and Mining with Deep IntelligenceElijah J Basile (13161057) 27 July 2022 (has links)
<p>Wearable devices, through their ubiquitous use and deployment across multiple areas of health, provide key insights into patient and individual status via big data captured by sensors at key parts of the individual’s body. While small and low cost, they are limited in computational and battery capacity. One key use of wearables has been in individual activity capture. For accelerometer and gyroscope data, oscillatory patterns exist between daily activities that users may perform. By leveraging spatial and temporal learning via CNN and LSTM layers to capture both the intra- and inter-oscillatory patterns that appear during these activities, we deployed data sparsification via autoencoders to extract the key topological properties from the data and transmit the compressed data via BLE to a central device for later decoding and analysis. Several autoencoder designs were developed to determine principles of system design, comparing encoding overhead on the sensor device against signal reconstruction accuracy. By leveraging an asymmetric autoencoder design, we were able to offload much of the computational and power cost of signal reconstruction from the wearable to the central device, while still providing robust reconstruction accuracy at several compression efficiencies. Via our high-precision Bluetooth voltmeter, the integrated sparsified data transmission configuration was tested for all quantization and compression efficiencies, achieving lower power consumption than the setup without data sparsification for all autoencoder configurations. </p>
<p><br></p>
<p>Human activity recognition (HAR) is a key facet of lifestyle and health monitoring. Effective HAR classification mechanisms and tools can provide healthcare professionals, patients, and individuals key insights into activity levels and behaviors without the intrusive use of human or camera observation. We leverage both spatial and temporal learning mechanisms via CNN and LSTM integrated architectures to derive an optimal classification architecture that provides robust classification performance for raw activity inputs, and determine that an LSTMCNN utilizing a stacked-bidirectional LSTM layer provides superior classification performance to the CNNLSTM (also utilizing a stacked-bidirectional LSTM) at all input widths. All inertial data classification frameworks are based on sensor data drawn from wearable devices placed at key sections of the body. Because wearable devices lack computational and battery power, data compression techniques have been employed to limit the quantity of transmitted data and reduce on-board power consumption. While this compression methodology has been shown to reduce overall device power consumption, it comes at the cost of some information loss in the reconstructed signals. By employing an asymmetric autoencoder design and training the LSTMCNN classifier with the reconstructed inputs, we minimized the classification performance degradation due to the wearable signal reconstruction error. The classifier is further trained on the autoencoder outputs for several input widths and with quantized and unquantized models. The accuracy of the classifier trained on reconstructed data ranged between 93.0% and 86.5%, depending on input width and autoencoder quantization, showing the promising potential of deep learning with wearable sparsification. </p>
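The encode-cheaply-on-device / decode-heavily-elsewhere split at the heart of the asymmetric design can be illustrated with a deliberately simplified linear stand-in: a PCA projection, where encoding is a single matrix multiply (cheap enough for a wearable) and reconstruction happens on the receiving side. The thesis uses learned neural autoencoders, not PCA; the data below is synthetic, and the 16x compression ratio is only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 128)
# synthetic accelerometer-like windows: noisy sinusoids with random phase
X = np.stack([np.sin(t + p) + 0.05 * rng.normal(size=t.size)
              for p in rng.uniform(0, 2 * np.pi, 200)])

pca = PCA(n_components=8).fit(X)      # fitted offline, ahead of deployment
codes = pca.transform(X)              # cheap encode (one matmul) on the wearable
X_hat = pca.inverse_transform(codes)  # decode on the central device

compression = X.shape[1] / codes.shape[1]
err = np.mean((X - X_hat) ** 2)
print(compression, err)
```

In the asymmetric neural version, the decoder can be made much deeper than the encoder, which is exactly what lets the reconstruction cost move off the wearable.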
|
70 |
CLEAVER: Classification of Everyday Activities via Ensemble RecognizersHsu, Samantha 01 December 2018 (has links) (PDF)
Physical activity can have immediate and long-term benefits on health and reduce the risk for chronic diseases. Valid measures of physical activity are needed in order to improve our understanding of the exact relationship between physical activity and health. Activity monitors have become a standard for measuring physical activity; accelerometers in particular are widely used in research and consumer products because they are objective, inexpensive, and practical. Previous studies have experimented with different monitor placements and classification methods. However, the majority of these methods were developed using data collected in controlled, laboratory-based settings, which is not reliably representative of real life data. Therefore, more work is required to validate these methods in free-living settings.
For our work, 25 participants were directly observed by trained observers for two two-hour activity sessions over a seven-day timespan. During the sessions, the participants wore accelerometers on the wrist, thigh, and chest. In this thesis, we tested a battery of machine learning techniques, including a hierarchical classification schema and a confusion matrix boosting method, to predict activity type, activity intensity, and sedentary time in one-second intervals. To do this, we created a dataset containing almost 100 hours' worth of observations from three sets of accelerometer data: an ActiGraph wrist monitor, a BioStampRC thigh monitor, and a BioStampRC chest monitor. Random forest and k-nearest neighbors are shown to consistently perform the best out of our traditional machine learning techniques. In addition, we reduce the severity of error from our traditional random forest classifiers on some monitors using a hierarchical classification approach, and combat the imbalanced nature of our dataset using a multi-class (confusion matrix) boosting method. Out of the three monitors, our models most accurately predict activity using either or both of the BioStamp accelerometers (with the exception of the chest BioStamp predicting sedentary time). Our results show that we outperform previous methods while still predicting behavior at a more granular level.
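A hierarchical classification schema of the kind mentioned above can be sketched generically: a top-level model predicts a coarse class, then routes each sample to a per-class model for the fine label. The class names, 2-D features, and random-forest base learners below are hypothetical stand-ins; the thesis's features come from accelerometer windows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalActivityClassifier:
    """Two-level schema: predict a coarse class first (e.g. sedentary vs.
    active), then route to a per-coarse-class model for the fine activity."""

    def fit(self, X, y_coarse, y_fine):
        self.top = RandomForestClassifier(n_estimators=50,
                                          random_state=0).fit(X, y_coarse)
        self.leaves = {}
        for c in np.unique(y_coarse):
            mask = y_coarse == c
            self.leaves[c] = RandomForestClassifier(
                n_estimators=50, random_state=0).fit(X[mask], y_fine[mask])
        return self

    def predict(self, X):
        coarse = self.top.predict(X)
        out = np.empty(len(X), dtype=object)
        for c in np.unique(coarse):
            mask = coarse == c
            out[mask] = self.leaves[c].predict(X[mask])
        return out

# demo with hypothetical activities and 2-D stand-in features
rng = np.random.default_rng(0)
centers = {"sit": [0, 0], "lie": [0, 3], "walk": [5, 0], "run": [5, 3]}
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in centers.values()])
y_fine = np.repeat(list(centers), 40)
y_coarse = np.where(np.isin(y_fine, ["sit", "lie"]), "sedentary", "active")
clf = HierarchicalActivityClassifier().fit(X, y_coarse=y_coarse, y_fine=y_fine)
print((clf.predict(X) == y_fine).mean())
```

One appeal of this structure, consistent with the error-severity result reported above, is that a mistake inside a coarse branch (e.g. walk vs. run) is usually less costly than one across branches (e.g. run vs. lie).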
|