1 |
Boundaries of Visual Motion. Rubin, John M.; Richards, W.A. 01 April 1985 (has links)
A representation of visual motion convenient for recognition should make prominent the qualitative differences among simple motions. We argue that the first stage in such a motion representation is to make explicit boundaries that we define as starts, stops, and force discontinuities. When one of these boundaries occurs in motion, human observers have the subjective impression that some fleeting, significant event has occurred. We go further and hypothesize that one of the subjective motion boundaries is seen if and only if one of our defined boundaries occurs. We enumerate all possible motion boundaries and provide evidence that they are psychologically real.
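The taxonomy above (starts, stops, and force discontinuities) lends itself to a small sketch: on a sampled speed signal, a start is a transition from rest to motion, a stop is the reverse, and a force discontinuity is an abrupt jump in acceleration. The function name and thresholds below are illustrative assumptions, not part of the original memo.

```python
def find_motion_boundaries(speed, dt=1.0, rest_eps=1e-3, jerk_eps=10.0):
    """Label motion boundaries in a 1-D speed signal.

    Returns a sorted list of (index, kind) pairs, where kind is one of
    'start', 'stop', or 'force_discontinuity'. Thresholds are
    illustrative; real data needs smoothing and tuning.
    """
    boundaries = []
    # Starts and stops: transitions across the rest threshold.
    for i in range(1, len(speed)):
        if speed[i - 1] < rest_eps <= speed[i]:
            boundaries.append((i, "start"))
        elif speed[i - 1] >= rest_eps > speed[i]:
            boundaries.append((i, "stop"))
    # Force discontinuities: abrupt jumps in acceleration (finite
    # differences of speed), i.e. a large discrete "jerk".
    accel = [(speed[i + 1] - speed[i]) / dt for i in range(len(speed) - 1)]
    for i in range(1, len(accel)):
        if abs(accel[i] - accel[i - 1]) / dt > jerk_eps:
            boundaries.append((i, "force_discontinuity"))
    return sorted(boundaries)

# A point that rests, moves at constant speed, then rests again:
trace = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print(find_motion_boundaries(trace))
```

On this trace the detector reports one start and one stop, matching the paper's intuition that such events are fleeting but perceptually salient.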
|
2 |
Universal motion-based control and motion recognition. Chen, Mingyu. 13 January 2014 (has links)
In this dissertation, we propose a universal motion-based control framework that supports general functionalities on 2D and 3D user interfaces with a single integrated design. We develop a hybrid framework of optical and inertial sensing technologies to track 6-DOF (degrees of freedom) motion of a handheld device, which includes the explicit 6-DOF (position and orientation in the global coordinates) and the implicit 6-DOF (acceleration and angular speed in the device-wise coordinates). Motion recognition is another key function of the universal motion-based control and contains two parts: motion gesture recognition and air-handwriting recognition. The interaction technique for each task is carefully designed to follow a consistent mental model and to ensure usability. The universal motion-based control achieves seamless integration of 2D and 3D interactions, motion gestures, and air-handwriting.
Motion recognition by itself is a challenging problem. For motion gesture recognition, we propose a normalization procedure to effectively address the large in-class motion variations among users. The main contribution is the investigation of the relative effectiveness of various feature dimensions (of tracking signals) for motion gesture recognition in both user-dependent and user-independent cases. For air-handwriting recognition, we first develop a strategy to model air-handwriting with basic elements of characters and ligatures. Then, we build word-based and letter-based decoding word networks for air-handwriting recognition. Moreover, we investigate the detection and recognition of air-fingerwriting as an extension to air-handwriting. To complete the evaluation of air-handwriting, we conduct a usability study supporting that air-handwriting is suitable for text input on a motion-based user interface.
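The in-class variation problem mentioned above is typically tackled by normalizing each gesture sample before feature extraction. A common recipe, and only a hedged guess at the dissertation's actual procedure, is to resample every tracking signal to a fixed length and remove per-user offset and amplitude:

```python
def normalize_gesture(signal, target_len=32):
    """Resample a 1-D tracking signal to a fixed length by linear
    interpolation, then normalize to zero mean and unit peak amplitude.
    This reduces speed and amplitude variation across users; the exact
    normalization in the dissertation may differ.
    """
    n = len(signal)
    resampled = []
    for k in range(target_len):
        pos = k * (n - 1) / (target_len - 1)  # fractional source index
        i = int(pos)
        frac = pos - i
        nxt = signal[min(i + 1, n - 1)]
        resampled.append(signal[i] * (1 - frac) + nxt * frac)
    mean = sum(resampled) / target_len
    centered = [v - mean for v in resampled]
    peak = max(abs(v) for v in centered) or 1.0  # guard constant signals
    return [v / peak for v in centered]

normed = normalize_gesture([0.0, 2.0, 4.0, 2.0, 0.0], target_len=8)
print(len(normed))
```

After this step, two users drawing the same gesture at different speeds and sizes produce directly comparable feature vectors.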
|
3 |
Feature selection and hierarchical classifier design with applications to human motion recognition. Freeman, Cecille. January 2014 (has links)
The performance of a classifier is affected by a number of factors including classifier type, the input features and the desired output. This thesis examines the impact of feature selection and classification problem division on classification accuracy and complexity.
Proper feature selection can reduce classifier size and improve classifier performance by minimizing the impact of noisy, redundant and correlated features. Noisy features can cause false associations between the features and the classifier output. Redundant and correlated features increase classifier complexity without adding information.
Output selection, or classification problem division, describes the division of a large classification problem into a set of smaller problems. Problem division can improve accuracy by allocating more resources to more difficult class divisions and by enabling the use of more specific feature sets for each sub-problem.
The first part of this thesis presents two methods for creating feature-selected hierarchical classifiers. The feature-selected hierarchical classification method jointly optimizes the features and the classification tree design using genetic algorithms. The multi-modal binary tree (MBT) method performs the class division and feature selection sequentially and tolerates misclassifications in the higher nodes of the tree. This yields a piecewise separation for classes that cannot be fully separated with a single classifier. Experiments show that the accuracy of MBT is comparable to that of other multi-class extensions, but with lower test time. Furthermore, the accuracy of MBT is significantly higher on multi-modal data sets.
The second part of this thesis focuses on input feature selection measures. A number of filter-based feature subset evaluation measures are evaluated with the goal of assessing their performance with respect to specific classifiers. Although many feature selection measures have been proposed in the literature, it is unclear which are appropriate for use with different classifiers. Sixteen common filter-based measures are tested on 20 real and 20 artificial data sets, the latter designed to probe for specific feature selection challenges. The strengths and weaknesses of each measure are discussed with respect to the specific feature selection challenges in the artificial data sets, correlation with classifier accuracy, and the ability to identify known informative features.
The results indicate that the best filter measure is classifier-specific. K-nearest neighbours classifiers work well with subset-based RELIEF, correlation feature selection or conditional mutual information maximization, whereas Fisher's interclass separability criterion and conditional mutual information maximization work better for support vector machines.
Based on the results of the feature selection experiments, two new filter-based measures are proposed based on conditional mutual information maximization, which performs well but cannot identify dependent features in a set and does not include a check for correlated features. Both new measures explicitly check for dependent features and the second measure also includes a term to discount correlated features. Both measures correctly identify known informative features in the artificial data sets and correlate well with classifier accuracy.
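Conditional mutual information maximization (CMIM), the base measure for both proposed extensions, greedily adds the feature whose information about the class is least redundant given any already-selected feature. A minimal discrete-variable sketch follows; it is not the thesis's implementation, and the toy data and names are illustrative.

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys, zs=None):
    """I(X;Y) if zs is None, else conditional I(X;Y|Z), for discrete
    sequences, estimated from empirical counts."""
    n = len(xs)
    if zs is None:
        zs = [0] * n  # a constant Z reduces I(X;Y|Z) to I(X;Y)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    total = 0.0
    for (x, y, z), c in pxyz.items():
        total += (c / n) * log2(c * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
    return total

def cmim_select(features, labels, k):
    """Greedy CMIM: pick k feature indices from `features`, a list of
    equal-length discrete columns."""
    selected = []
    while len(selected) < k:
        best, best_score = None, -1.0
        for j, col in enumerate(features):
            if j in selected:
                continue
            # Score: the least information the candidate adds about the
            # labels, given any single already-selected feature.
            score = min(
                [mutual_info(col, labels, features[s]) for s in selected]
                or [mutual_info(col, labels)]
            )
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy data: feature 0 determines the label, feature 1 duplicates it
# (redundant), feature 2 is noise.
y  = [0, 0, 1, 1, 0, 1, 0, 1]
f0 = [0, 0, 1, 1, 0, 1, 0, 1]
f1 = list(f0)
f2 = [0, 1, 0, 1, 1, 0, 1, 0]
print(cmim_select([f0, f1, f2], y, k=2))
```

The redundant copy f1 scores zero once f0 is selected, which illustrates the redundancy check that the two new measures extend with an explicit test for dependent and correlated features.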
The final part of this thesis examines the use of feature selection for time-series data by using feature selection to determine important individual time windows or key frames in the series. Time-series feature selection is used with the MBT algorithm to create classification trees for time-series data. The feature selected MBT algorithm is tested on two human motion recognition tasks: full-body human motion recognition from joint angle data and hand gesture recognition from electromyography data. Results indicate that the feature selected MBT is able to achieve high classification accuracy on the time-series data while maintaining a short test time.
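Treating each time window as a candidate feature, as above, reduces key-frame selection to scoring windows by how well they separate the classes. A small sketch using a Fisher-style criterion; the thesis pairs its selection with MBT, and the scoring function here is an illustrative stand-in.

```python
def fisher_score(a, b):
    """Separation of two samples of window values: squared mean gap
    over pooled variance."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((x - mb) ** 2 for x in b) / len(b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def top_windows(series_a, series_b, n_keep):
    """Rank time indices (windows of length 1, for simplicity) by how
    well they separate class-A series from class-B series."""
    length = len(series_a[0])
    scores = []
    for t in range(length):
        col_a = [s[t] for s in series_a]
        col_b = [s[t] for s in series_b]
        scores.append((fisher_score(col_a, col_b), t))
    return [t for _, t in sorted(scores, reverse=True)[:n_keep]]

# The two classes differ only in the middle of the sequence, so the
# key frame should be index 2.
a = [[0, 0, 5, 0], [0, 0, 6, 0], [0, 1, 5, 0]]
b = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(top_windows(a, b, n_keep=1))  # → [2]
```

Keeping only the highest-scoring windows shortens the feature vector fed to each tree node, which is what makes the short test times reported above possible.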
|
4 |
Metric Learning via Linear Embeddings for Human Motion Recognition. Kong, ByoungDoo. 18 December 2020 (has links)
We consider the application of Few-Shot Learning (FSL) and dimensionality reduction to the problem of human motion recognition (HMR). Human motion has unique characteristics, such as its dynamic and high-dimensional nature. Recent research on human motion recognition uses deep neural networks with many layers, but large datasets must be collected and labeled to train such networks, a process that is both time-consuming and expensive because it requires a large motion capture database. Despite significant progress in human motion recognition, state-of-the-art algorithms still misclassify actions, in part because of the difficulty of obtaining large-scale labeled human motion datasets. To address these limitations, we use metric-based FSL methods that operate on small amounts of data in conjunction with dimensionality reduction. We also propose a modified dimensionality reduction scheme based on the preservation of secants, tailored to arbitrary useful distances such as the geodesic distance learned by ISOMAP. We provide multiple experimental results that demonstrate improvements in human motion classification.
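Secant preservation asks that a linear map keep the normalized difference vectors (secants) between data points from collapsing. A rough numpy sketch of the idea, choosing the projection as the top principal directions of the secant set; the abstract's actual scheme, which adapts to learned geodesic distances, is more involved.

```python
import numpy as np

def secant_projection(X, dim):
    """Linear embedding that tries to preserve secants: unit difference
    vectors between all point pairs. We take the top eigenvectors of
    the secants' second-moment matrix, a crude stand-in for the
    secant-preserving optimization described in the thesis.
    """
    n, d = X.shape
    secants = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = X[i] - X[j]
            norm = np.linalg.norm(diff)
            if norm > 1e-12:
                secants.append(diff / norm)
    S = np.array(secants)                 # (num_pairs, d)
    moment = S.T @ S                      # second-moment matrix of secants
    vals, vecs = np.linalg.eigh(moment)   # eigenvalues in ascending order
    P = vecs[:, -dim:]                    # top-`dim` directions, (d, dim)
    return X @ P, P

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Y, P = secant_projection(X, dim=2)
print(Y.shape, P.shape)
```

A projection chosen this way keeps the directions along which point pairs are most often separated, which is the property that lets nearest-neighbor-style few-shot classifiers still discriminate motions in the reduced space.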
|
5 |
Robot Motion and Task Learning with Error Recovery. Chang, Guoting. January 2013 (has links)
The ability to learn is essential for robots to function and perform services within a dynamic human environment. Robot programming by demonstration facilitates learning through a human teacher without the need to develop new code for each task that the robot performs. In order for learning to be generalizable, the robot needs to be able to grasp the underlying structure of the task being learned. This requires appropriate knowledge abstraction and representation. The goal of this thesis is to develop a learning by imitation system that abstracts knowledge of human demonstrations of a task and represents the abstracted knowledge in a hierarchical framework. The learning by imitation system is capable of performing both action and object recognition based on video stream data at the lower level of the hierarchy, while the sequence of actions and object states observed is reconstructed at the higher level of the hierarchy in order to form a coherent representation of the task. Furthermore, error recovery capabilities are included in the learning by imitation system to improve robustness to unexpected situations during task execution. The first part of the thesis focuses on motion learning to allow the robot to both recognize the actions for task representation at the higher level of the hierarchy and to perform the actions to imitate the task. In order to efficiently learn actions, the actions are segmented into meaningful atomic units called motion primitives. These motion primitives are then modeled using dynamic movement primitives (DMPs), a dynamical system model that can robustly generate motion trajectories to arbitrary goal positions while maintaining the overall shape of the demonstrated motion trajectory. The DMPs also contain weight parameters that are reflective of the shape of the motion trajectory. 
These weight parameters are clustered using affinity propagation (AP), an efficient exemplar-based clustering algorithm, in order to determine groups of similar motion primitives and thus perform motion recognition. The approach of DMPs combined with AP was experimentally verified on two separate motion data sets for its ability to recognize and generate motion primitives. The second part of the thesis outlines how the task representation is created and used for imitating observed tasks. This includes object and object state recognition using simple computer vision techniques, as well as the automatic construction of a Petri net (PN) model to describe an observed task. Tasks are composed of a sequence of actions that have specific pre-conditions, i.e. object states required before the action can be performed, and post-conditions, i.e. object states that result from the action. PNs inherently encode the pre-conditions and post-conditions of a particular event, i.e. action, and can model tasks as a coherent sequence of actions and object states. In addition, PNs are very flexible in modeling a variety of tasks, including tasks that involve both sequential and parallel components. The automatic PN creation process has been tested on both a sequential two-block stacking task and a three-block stacking task involving both sequential and parallel components. The PN provides a meaningful representation of the observed tasks that can be used by a robot to imitate them. Lastly, error recovery capabilities are added to the learning by imitation system in order to allow the robot to readjust the sequence of actions needed during task execution. The error recovery component is able to deal with two types of errors: unexpected but known situations, and unexpected, unknown situations. In the case of unexpected but known situations, the learning system is able to search through the PN to identify the known situation and the actions needed to complete the task.
This ability is useful not only for error recovery from known situations, but also for human-robot collaboration, where the human unexpectedly helps to complete part of the task. In the case of situations that are both unexpected and unknown, the robot prompts the human demonstrator to teach it how to recover from the error to a known state. By observing the error recovery procedure and automatically extending the PN with the error recovery information, the situation encountered becomes part of the known situations, and the robot is able to autonomously recover from the error in the future. This error recovery approach was tested successfully on errors encountered during the three-block stacking task.
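The DMP model described above can be sketched minimally: a critically damped spring-damper system pulled toward the goal, with the learned forcing term (whose weight parameters the thesis clusters with AP) omitted for brevity. Gains, step size, and the function name are illustrative assumptions.

```python
from math import sqrt

def dmp_rollout(x0, goal, steps=2000, dt=0.001, tau=1.0, K=25.0):
    """Integrate a 1-D DMP transformed system with the learned forcing
    term set to zero, so the trajectory converges smoothly to `goal`
    from any start. A full DMP adds a forcing term f(s) whose weights
    reproduce the demonstrated trajectory shape.
    """
    D = 2.0 * sqrt(K)  # critical damping: no overshoot
    x, v = x0, 0.0
    traj = [x]
    for _ in range(steps):
        a = (K * (goal - x) - D * v) / tau  # + f(s) in a full DMP
        v += a * dt
        x += v * dt / tau
        traj.append(x)
    return traj

traj = dmp_rollout(x0=0.0, goal=1.0)
print(traj[-1])  # close to the goal 1.0
```

Because the goal enters the dynamics as an attractor rather than being baked into the weights, the same primitive generalizes to arbitrary goal positions, which is the property the thesis relies on when re-sequencing actions during error recovery.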
|
7 |
Vision-based human motion description and recognition. Kellokumpu, V.-P. (Vili-Petteri). 29 November 2011 (has links)
Abstract
This thesis investigates vision-based description and recognition of human movements. Automated vision-based human motion analysis is a fundamental technology for creating video-based human-computer interaction systems. Because of its wide range of potential applications, the topic has become an active area of research in the computer vision community.
This thesis proposes the use of low-level descriptions of dynamics for human movement description and recognition. Two groups of approaches are developed: first, texture-based methods that extract dynamic features for human movement description, and second, a framework that considers ballistic dynamics for human movement segmentation and recognition.
Two texture-based descriptions for human movement analysis are introduced. The first method uses temporal templates as a preprocessing stage and extracts a motion description using local binary pattern texture features. This approach is then extended to a spatiotemporal space, and a dynamic texture method that uses local binary patterns from three orthogonal planes is proposed. The method needs no accurate segmentation of silhouettes; rather, it is designed to work directly on image data. The dynamic texture description is also applied to gait recognition. The proposed descriptions have been experimentally validated on publicly available databases.
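The local binary pattern at the heart of both descriptions encodes each pixel by thresholding its eight neighbours against the centre; the spatiotemporal variant simply repeats this on the three orthogonal planes of a video volume. A basic single-plane sketch:

```python
def lbp_code(img, i, j):
    """8-neighbour local binary pattern code of pixel (i, j) in a 2-D
    list-of-lists grayscale image. Neighbours at or above the centre
    value contribute a 1 bit, read clockwise from the top-left.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[i][j]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        if img[i + di][j + dj] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels; accumulated
    over temporal templates or the three orthogonal planes of a video
    volume, such histograms form the motion description."""
    hist = [0] * 256
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            hist[lbp_code(img, i, j)] += 1
    return hist

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_code(flat, 1, 1))  # → 255: every neighbour equals the centre
```

Because each code depends only on local intensity ordering, the histogram is robust to monotonic lighting changes, which is one reason the method can work on raw image data without silhouette segmentation.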
Psychological studies of human movement indicate that common movements, such as reaching and striking, are ballistic by nature. Based on these psychological observations, this thesis considers the segmentation and recognition of ballistic movements using low-level motion features. Experimental results on motion capture and video data show the effectiveness of the method.
|
8 |
Computer Vision in Fitness: Exercise Recognition and Repetition Counting / Datorseende i fitness: Träningsigenkänning och upprepningsräkning. Barysheva, Anna. January 2022 (has links)
Motion classification and action localization have rapidly become essential tasks in computer vision and video analytics. In particular, Human Action Recognition (HAR), which has important applications in clinical assessments, activity monitoring, and sports performance evaluation, has drawn a lot of attention in research communities. Nevertheless, the high-dimensional and time-continuous nature of motion data creates non-trivial challenges in action detection and action recognition. In this degree project, on a set of recorded, unannotated mixed workouts, we test and evaluate unsupervised and semi-supervised machine learning models to identify the correct location, i.e., a timestamp, of various exercises in videos and to study different approaches to clustering detected actions. This is done by modelling the data via a two-step clustering pipeline using the Bag-of-Visual-Words (BoVW) approach. Repetition counting is also considered as a parallel task. We find that clustering alone tends to produce cluster solutions with a mixture of exercises and is not sufficient to solve the exercise recognition problem. Instead, we use clustering as an initial step to aggregate similar exercises, which allows us to efficiently find many repetitions of similar exercises for further annotation. When combined with a subsequent Support Vector Machine (SVM) classifier, the BoVW concept proved effective, achieving an accuracy of 95.5% on the labelled subset. Considerable attention has also been paid to various methods of dimensionality reduction and to benchmarking their ability to encode the original data into a lower-dimensional latent space.
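The two-step pipeline described above, quantizing local motion descriptors against a learned codebook and then histogramming them, can be sketched as follows; the tiny k-means and the toy descriptors are illustrative stand-ins for the project's actual feature extraction.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on lists of floats; returns k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:  # keep old centroid if cluster went empty
                centroids[c] = [sum(v) / len(v) for v in zip(*clusters[c])]
    return centroids

def bovw_histogram(descriptors, centroids):
    """Bag-of-visual-words: count how many descriptors fall nearest to
    each codebook centroid. The normalized histogram is the clip-level
    feature that would be fed to the SVM classifier."""
    k = len(centroids)
    hist = [0] * k
    for p in descriptors:
        nearest = min(range(k), key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(p, centroids[c])))
        hist[nearest] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# Toy "descriptors" from two workout clips with distinct motion statistics.
clip_a = [[0.1, 0.0], [0.2, 0.1], [0.0, 0.2]]
clip_b = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
codebook = kmeans(clip_a + clip_b, k=2)
print(bovw_histogram(clip_a, codebook), bovw_histogram(clip_b, codebook))
```

Each clip ends up concentrated on a different codeword, which is exactly the separability the subsequent SVM exploits; clustering alone, as the abstract notes, only aggregates similar exercises for annotation.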
|