Subject: HUMAN ACTIVITY RECOGNITION
1. A hybrid gait recognition solution using video and ground contact information
Fullenkamp, Adam M. January 2007.
Thesis (Ph.D.), University of Delaware, 2007. Principal faculty advisor: James G. Richards, College of Health Sciences. Includes bibliographical references.
2. A Multi-Formal Languages Collaborative Scheme for Complex Human Activity Recognition and Behavioral Patterns Extraction
Angeleas, Anargyros. 06 June 2018.
No description available.
3. Detecting irregularity in videos using spatiotemporal volumes
Li, Yun. January 2007.
Thesis (M.Phil.), Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 68-72). Abstracts in English and Chinese.
Table of contents: Abstract (p.I); Abstract in Chinese (摘要, p.III); Acknowledgments (p.IV); List of Contents (p.VI); List of Figures (p.VII)
Chapter 1 Introduction (p.1): 1.1 Visual Detection (p.2); 1.2 Irregularity Detection (p.4)
Chapter 2 System Overview (p.7): 2.1 Definition of Irregularity (p.7); 2.2 Contributions (p.8); 2.3 Review of Previous Work (p.9), with 2.3.1 Model-Based Methods (p.9) and 2.3.2 Statistical Methods (p.11); 2.4 System Outline (p.14)
Chapter 3 Background Subtraction (p.16): 3.1 Related Work (p.17); 3.2 Adaptive Mixture Model (p.18), with 3.2.1 Online Model Update (p.20), 3.2.2 Background Model Estimation (p.22), and 3.2.3 Foreground Segmentation (p.24)
Chapter 4 Feature Extraction (p.28): 4.1 Various Feature Descriptors (p.29); 4.2 Histogram of Oriented Gradients (p.30), with 4.2.1 Feature Descriptor (p.31) and 4.2.2 Feature Merits (p.33); 4.3 Subspace Analysis (p.35), with 4.3.1 Principal Component Analysis (p.35) and 4.3.2 Subspace Projection (p.37)
Chapter 5 Bayesian Probabilistic Inference (p.39): 5.1 Estimation of PDFs (p.40), with 5.1.1 K-Means Clustering (p.40) and 5.1.2 Kernel Density Estimation (p.42); 5.2 MAP Estimation (p.44), with 5.2.1 ML Estimation and MAP Estimation (p.44) and 5.2.2 Detection through MAP (p.46); 5.3 Efficient Implementation (p.47), with 5.3.1 K-D Trees (p.48) and 5.3.2 Nearest Neighbor (NN) Algorithm (p.49)
Chapter 6 Experiments and Conclusion (p.51): 6.1 Experiments (p.51), with Outdoor Video Surveillance Experiments 1-3 (p.52, p.54, p.56) and Classroom Monitoring Experiment 4 (p.61); 6.2 Algorithm Evaluation (p.64); 6.3 Conclusion (p.66)
Bibliography (p.68)
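The chapter outline above implies a pipeline of background subtraction, HOG feature extraction, PCA subspace projection, and kernel-density-based MAP inference. As a rough illustration only, the following Python sketch scores test features by their negative log-likelihood under a density model fitted to "regular" activity; the component count, bandwidth, and use of scikit-learn are assumptions, not the thesis's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

def fit_regularity_model(train_features, n_components=20, bandwidth=0.5):
    """Fit a PCA subspace and a kernel density model on features of 'regular' activity.
    (n_components and bandwidth are illustrative choices, not values from the thesis.)"""
    pca = PCA(n_components=n_components).fit(train_features)
    reduced = pca.transform(train_features)
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(reduced)
    return pca, kde

def irregularity_scores(pca, kde, test_features):
    """Higher score = lower likelihood under the model of regular activity."""
    reduced = pca.transform(test_features)
    return -kde.score_samples(reduced)  # negative log-likelihood per sample

# Toy usage: 500 training and 10 test feature vectors of dimension 128
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 128))
test = rng.normal(size=(10, 128))
pca, kde = fit_regularity_model(train)
print(irregularity_scores(pca, kde, test))
```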
4. Detecting Hand-Ball Events in Video
Miller, Nicholas. January 2008.
We analyze videos in which a hand interacts with a basketball. In this work, we present a computational system which detects and classifies hand-ball events, given the trajectories of a hand and ball. Our approach is to determine non-gravitational parts of the ball's motion using only the motion of the hand as a reliable cue for hand-ball events.
This thesis makes three contributions. First, we show that hand motion can be segmented using piecewise fifth-order polynomials, inspired by work in motor control; experimentally, hand-ball events correspond remarkably closely to the segmentation breakpoints. Second, by fitting a context-dependent gravitational model to the ball over an adaptive window, we can isolate places where the hand is causing non-gravitational motion of the ball. Finally, given a precise segmentation, we use the measured velocity steps (force impulses) on the ball to detect and classify various event types.
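As an illustration of the gravitational-model idea, the sketch below fits a gravity-only model to a sliding window of vertical ball positions and reports the residual; windows with large residuals suggest non-gravitational (hand-induced) motion. The window length, sampling rate, and least-squares formulation are assumptions for illustration, not the thesis's context-dependent adaptive-window method.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gravity_residual(times, ball_y, window):
    """For each sliding window, fit y(t) = y0 + v0*t - 0.5*G*t^2 (gravity-only model)
    over y0 and v0 by least squares and return the RMS residual.
    Large residuals suggest the hand is applying force to the ball."""
    residuals = []
    for start in range(len(times) - window + 1):
        t = times[start:start + window] - times[start]
        y = ball_y[start:start + window]
        # unknowns: y0, v0; the known gravity term is moved to the target side
        A = np.column_stack([np.ones_like(t), t])
        target = y + 0.5 * G * t**2
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        residuals.append(np.sqrt(np.mean((A @ coef - target) ** 2)))
    return np.array(residuals)

# Toy usage: a ball in free fall for 1 s, then held fixed (non-gravitational) afterwards
t = np.arange(0, 1.5, 1 / 30)  # 30 fps
y = np.where(t < 1.0, 2.0 - 0.5 * G * t**2, 2.0 - 0.5 * G * 1.0**2)
print(gravity_residual(t, y, window=10))
```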
5. Automatic extraction of behavioral patterns for elderly mobility and daily routine analysis
Li, Chen. 08 June 2018.
The elderly living in smart homes can have their daily movement recorded and analyzed. Because different elders have their own living habits, a methodology that can automatically identify their daily activities and discover their daily routines will be useful for better elderly care and support. In this thesis research, we focus on developing data mining algorithms for automatic detection of behavioral patterns from the trajectory data of an individual, for activity identification, daily routine discovery, and activity prediction. The key challenges for human activity analysis include the need to model longer-range dependencies among sensor-triggering events and to capture the spatio-temporal variations of the behavioral patterns exhibited by humans. We propose to represent the trajectory data using a behavior-aware flow graph, a probabilistic finite state automaton whose nodes and edges are attributed with local behavior-aware features. Subflows can then be extracted from the flow graph using kernel k-means as the underlying behavioral patterns for activity identification. Given the identified activities, we propose a novel nominal matrix factorization method under a Bayesian framework with Lasso to extract highly interpretable daily routines. To better account for variations in activity durations within each daily routine, we further extend the Bayesian framework with a Markov jump process prior to incorporate the shift-invariant property into the model. For empirical evaluation, the proposed methodologies have been compared with a number of existing activity identification and daily routine discovery methods on both synthetic and publicly available real smart home data sets, with promising results. In the thesis, we also illustrate how the proposed unsupervised methodology could be used to support exploratory behavior analysis for elderly care.
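A minimal sketch of the flow-graph construction, assuming sensor identifiers as events and omitting the local behavior-aware features (such as dwell times) that the thesis attaches to nodes and edges:

```python
from collections import defaultdict

def build_flow_graph(event_sequences):
    """Build a simple probabilistic finite state automaton from sensor-event
    sequences: nodes are sensor IDs, edge weights are transition probabilities.
    Only a sketch of the flow-graph idea; behavior-aware attributes are omitted."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in event_sequences:
        for src, dst in zip(seq, seq[1:]):
            counts[src][dst] += 1
    graph = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        graph[src] = {dst: n / total for dst, n in dsts.items()}
    return graph

# Toy usage: two days of motion-sensor firings in a smart home (hypothetical sensor names)
days = [
    ["bed", "bathroom", "kitchen", "living_room", "kitchen", "bed"],
    ["bed", "kitchen", "bathroom", "kitchen", "living_room", "bed"],
]
for src, edges in build_flow_graph(days).items():
    print(src, edges)
```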
6. WiFi-Based Driver Activity Recognition Using CSI Signal
Bai, Yunhao. January 2020.
No description available.
7. Trust in Human Activity Recognition Deep Learning Models
Simons, Ama. January 2021.
Trust is explored in this thesis through an analysis of the robustness of wearable-device-based artificial intelligence models to changes in data acquisition, specifically changes in wearable device hardware and in recording session. Three human activity recognition models are used as a vehicle for this exploration: Model A, trained on accelerometer signals recorded by a wearable sensor referred to as Astroskin; Model H, trained on accelerometer signals from a wearable sensor referred to as the BioHarness; and Model A Type 1, trained on Astroskin accelerometer signals recorded during the first session of the experimental protocol. On a test set recorded by Astroskin, Model A had 99.07% accuracy; on a test set recorded by the BioHarness, it had 65.74% accuracy. On a test set recorded by the BioHarness, Model H had 95.37% accuracy; on a test set recorded by Astroskin, it had 29.63% accuracy. Model A Type 1 achieved an average accuracy of 99.57% on data recorded by the same wearable sensor in the same session, 50.95% on a test set recorded by the same wearable sensor in a different session, 41.31% on data recorded by a different wearable sensor in the same session, and 19.28% on data recorded by a different wearable sensor in a different session. An out-of-domain discriminator for Model A Type 1 was also implemented; it was able to differentiate between the data that trained Model A Type 1 and other data (recorded by a different wearable device or in a different session) with an accuracy of 97.60%.
Thesis, Master of Applied Science (MASc).
The trustworthiness of artificial intelligence must be explored before society can fully reap its benefits. The element of trust explored in this thesis is the robustness of wearable-device-based artificial intelligence models to changes in data acquisition, specifically changes in the wearable device used to record the input data and input data from different recording sessions. Using human activity recognition models as a vehicle, the results show that performance degrades when the wearable device is changed and when data come from a different recording session. An out-of-domain discriminator is developed to alert users when a potential performance degradation may occur.
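A minimal sketch of an out-of-domain discriminator, assuming windowed accelerometer statistics as features and a logistic-regression classifier; the thesis does not specify this particular feature set or classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def window_features(signal, window=100):
    """Summarize each accelerometer window with simple statistics (mean, std, range)."""
    windows = signal[: len(signal) // window * window].reshape(-1, window, signal.shape[1])
    return np.concatenate(
        [windows.mean(1), windows.std(1), windows.max(1) - windows.min(1)], axis=1
    )

# Toy stand-ins for in-domain (training device/session) and out-of-domain recordings
rng = np.random.default_rng(0)
in_domain = rng.normal(0.0, 1.0, size=(5000, 3))   # 3-axis accelerometer
out_domain = rng.normal(0.3, 1.4, size=(5000, 3))  # shifted/scaled, e.g. other device

X = np.vstack([window_features(in_domain), window_features(out_domain)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))  # 1 = out of domain

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("discriminator accuracy:", disc.score(X_te, y_te))
```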
8. Handwritten signature verification using locally optimized distance-based classification
Moolla, Yaseen. 28 November 2013.
Although handwritten signature verification has been extensively researched, it has not yet achieved an optimal accuracy rate. Efficient and accurate signature verification techniques are therefore required, since signatures are still widely used as a means of personal verification. This research work presents efficient distance-based classification techniques as an alternative to supervised learning classification techniques (SLTs). Two different feature extraction techniques were used, namely the Enhanced Modified Direction Feature (EMDF) and the Local Directional Pattern feature (LDP). These were used to analyze the effect of several different distance-based classification techniques, among them the cosine similarity measure and the Mahalanobis, Canberra, Manhattan, Euclidean, weighted Euclidean, and fractional distances. Additionally, novel weighted fractional distances, as well as locally optimized resampling of feature vector sizes, were tested. The best accuracy was achieved by applying a combination of the weighted fractional distances and locally optimized resampling to the Local Directional Pattern features. This combination of distance-based classification techniques achieved an accuracy of 89.2% with the EMDF feature extraction technique and 90.8% with the LDP feature extraction technique. These results are comparable to those in the literature, where the same feature extraction techniques were classified with SLTs. The best of the distance-based classification techniques were found to produce greater accuracy than the SLTs.
Thesis (M.Sc.), University of KwaZulu-Natal, Westville, 2012.
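A minimal sketch of the fractional-distance idea, assuming a generic per-dimension weighting; the thesis's specific weighting and locally optimized resampling schemes are not reproduced here.

```python
import numpy as np

def weighted_fractional_distance(x, y, weights=None, p=0.5):
    """Minkowski-style distance with fractional exponent p < 1; weights allow
    per-dimension emphasis. A generic formulation, not necessarily the exact
    weighting scheme used in the thesis."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    return np.sum(w * np.abs(x - y) ** p) ** (1.0 / p)

def classify(query, references, labels, p=0.5, weights=None):
    """Assign the label of the nearest reference under the fractional distance."""
    dists = [weighted_fractional_distance(query, r, weights, p) for r in references]
    return labels[int(np.argmin(dists))]

# Toy usage: two reference signature feature vectors (hypothetical values)
refs = np.array([[0.2, 0.8, 0.1], [0.9, 0.1, 0.7]])
labels = ["genuine", "forgery"]
print(classify([0.25, 0.75, 0.15], refs, labels))
```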
9. Trajectory Analytics
Santiteerakul, Wasana. 05 1900.
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in the wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations that can be used to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to obtain the ordered sequence of related trajectory-segments. Each pair of corresponding trajectory-segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we use an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify low-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
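A minimal sketch of the clustering step, assuming hypothetical relation-token names and token histograms as features; the thesis's actual trajectory-segment relations and segmentation procedure are not reproduced here.

```python
import numpy as np

def k_medians(X, k, n_iter=50, seed=0):
    """Plain k-medians: Manhattan-distance assignment, per-dimension median update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

TOKENS = ["approach", "move_together", "separate"]  # hypothetical relation tokens

def histogram(sequence):
    """Count relation tokens in one pairwise trajectory-segment relationship sequence."""
    return np.array([sequence.count(t) for t in TOKENS], dtype=float)

# Toy usage: relation-token sequences for three trajectory pairs
pairs = [
    ["approach", "move_together", "move_together"],
    ["approach", "separate", "separate"],
    ["move_together", "move_together", "approach"],
]
X = np.stack([histogram(p) for p in pairs])
labels, centers = k_medians(X, k=2)
print(labels)
```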