41.
Metric Learning via Linear Embeddings for Human Motion Recognition. Kong, ByoungDoo, 18 December 2020
We consider the application of Few-Shot Learning (FSL) and dimensionality reduction to the problem of human motion recognition (HMR). Human motion has unique structural characteristics, such as its dynamic and high-dimensional nature. Recent research on human motion recognition uses deep neural networks with multiple layers, and large datasets must be collected to train such networks. This process is both time-consuming and expensive, since a large motion capture database must be collected and labeled. Despite significant progress in human motion recognition, state-of-the-art algorithms still misclassify actions, in part because large-scale labeled human motion datasets are difficult to obtain. To address these limitations, we use metric-based FSL methods that operate on small datasets, in conjunction with dimensionality reduction. We also propose a modified dimensionality reduction scheme based on the preservation of secants, tailored to arbitrary useful distances such as the geodesic distance learned by ISOMAP. We provide multiple experimental results that demonstrate improvements in human motion classification.
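As a rough illustration of the two ingredients above: geodesic distances of the kind ISOMAP learns can be approximated by shortest paths over a k-nearest-neighbor graph, and a metric-based few-shot classifier can then label a query by its nearest support example. This is a generic sketch on toy data, not the proposed secant-preserving scheme; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def geodesic_distances(X, k=3):
    """ISOMAP-style geodesic approximation: Euclidean k-NN graph
    followed by Floyd-Warshall all-pairs shortest paths."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]   # k nearest neighbors (skip self)
        G[i, nbrs] = d[i, nbrs]
        G[nbrs, i] = d[i, nbrs]            # keep the graph symmetric
    for m in range(n):                     # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

def one_nn_few_shot(X_support, y_support, x_query):
    """Metric-based few-shot step: label a query by its nearest support sample."""
    d = np.sqrt(((X_support - x_query) ** 2).sum(-1))
    return y_support[np.argmin(d)]
```

On points along a line, the graph shortest path recovers the distance along the curve rather than the ambient shortcut, which is the property the abstract's geodesic distances rely on.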
42.
Achieving Practical Functional Electrical Stimulation-Driven Reaching Motions in an Individual with Tetraplegia. Wolf, Derek N., 10 December 2020
No description available.
43.
Multimodal Machine Learning in Human Motion Analysis. Fu, Jia, January 2022
Currently, most long-term human motion classification and prediction tasks are driven by spatio-temporal data of the human trunk. In addition, data from multiple modalities can change idiosyncratically with human motion, such as the electromyography (EMG) of specific muscles and the respiratory rhythm. Meanwhile, progress in Artificial Intelligence research on the collaborative understanding of image, video, audio and semantics relies mainly on MultiModal Machine Learning (MMML). This work explores human motion classification strategies that exploit multi-modality information using MMML. The research is conducted on the Unige-Maastricht Dance dataset. Attention-based Deep Learning architectures are proposed for modal fusion at three levels: 1) feature fusion with a Component Attention Network (CANet); 2) model fusion by innovatively combining a Graph Convolution Network (GCN) with CANet; 3) late fusion by simple voting. All of these surpass the single-modality benchmark. Moreover, the effect of each modality in each fusion method is analyzed through comprehensive comparison experiments. Finally, statistical analysis and visualization of the attention scores are performed to help distill the most informative temporal/component cues characterizing two qualities of motion.
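Late fusion by simple voting, the third fusion level mentioned, amounts to letting each modality-specific classifier cast one vote and taking the majority label. A minimal sketch, with made-up label names, not the thesis implementation:

```python
from collections import Counter

def late_fusion_vote(predictions):
    """Late fusion by majority voting: `predictions` holds one predicted
    label per modality-specific classifier; the most common label wins
    (ties broken by first-seen order, as Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, if the skeleton and EMG classifiers agree and the respiration classifier disagrees, the fused prediction follows the majority.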
44.
Využití kompenzačních cvičení u mladých hokejistů / Using of compensation exercise in young ice hockey players. Šmíd, Tomáš, January 2013
Title: Using of compensation exercise in young ice hockey players. Objectives: The main objective is to highlight the lack of compensatory exercise in the training process of junior ice hockey players and to suggest how to improve this situation. Methods: Background material was obtained from certified sources and quality literature and websites covering ice hockey, the system of human motion, sports training and compensatory exercises. Practical data were acquired through consultation, analysis, observation and measurement. Results: Analysis of the training plan of the young adolescent players of HC Star Prague revealed a significant training load without adequate compensation. The proposed solution was the inclusion of a training unit based on selected compensatory movements. To evaluate the effect of the proposal, a series of measurements of shortened muscles in selected players was carried out. Keywords: ice hockey, youth, system of human motion, sports training, compensatory exercises, training unit, load
45.
Deep learning for human motion analysis / Apprentissage automatique de représentations profondes pour l'analyse du mouvement humain. Neverova, Natalia, 08 April 2016
The research goal of this work is to develop learning methods advancing the automatic analysis and interpretation of human motion from different perspectives and based on various sources of information, such as images, video, depth, mocap data, audio and inertial sensors. For this purpose, we propose several deep neural models and associated training algorithms for supervised classification and semi-supervised feature learning, as well as modelling of temporal dependencies, and show their efficiency on a set of fundamental tasks, including detection, classification, parameter estimation and user verification. First, we present a method for human action and gesture spotting and classification based on multi-scale and multi-modal deep learning from visual signals (such as video, depth and mocap data). Key to our technique is a training strategy which exploits, first, careful initialization of individual modalities and, second, gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving the uniqueness of each modality-specific representation. Moving forward from one-to-N mapping to continuous evaluation of gesture parameters, we address the problem of hand pose estimation and present a new method for regression on depth images, based on semi-supervised learning using convolutional deep neural networks, where raw depth data is fused with an intermediate representation in the form of a segmentation of the hand into parts. In separate but related work, we explore convolutional temporal models for human authentication based on their motion patterns. In this project, the data is captured by inertial sensors (such as accelerometers and gyroscopes) built into mobile devices. We propose an optimized shift-invariant dense convolutional mechanism and incorporate the discriminatively-trained dynamic features in a probabilistic generative framework taking into account temporal characteristics. Our results demonstrate that human kinematics convey important information about user identity and can serve as a valuable component of multi-modal authentication systems.
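The channel-dropping idea behind ModDrop can be sketched as a training-time mask that zeroes out whole modalities per sample, forcing the fused network to learn correlations that survive missing channels. This is an illustrative sketch only, not the thesis implementation; the function name, shapes, and drop probability are assumptions.

```python
import numpy as np

def moddrop_mask(batch_modalities, p_drop=0.3, rng=None):
    """ModDrop-style regularization sketch: during training, zero out each
    modality's input with probability `p_drop`, independently per sample.
    `batch_modalities` is a list of arrays, one per modality, each of
    shape (batch, features)."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for x in batch_modalities:
        # per-sample keep mask, broadcast across the feature dimension
        keep = rng.random((x.shape[0], 1)) >= p_drop
        out.append(x * keep)
    return out
```

At test time no dropping is applied (p_drop=0), which also makes the model tolerant of a channel that is genuinely missing at deployment.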
46.
Reconhecimento de movimentos humanos para imitação e controle de um robô humanoide / Recognition of human motions for imitation and control of a humanoid robot. Cavalcante, Fernando Zuher Mohamad Said, 24 August 2012
In human-robot interactions there are still many limitations to overcome before communication feels natural to the human senses. The ability to interact with humans in a natural way in social contexts (through speech, gestures, facial expressions and body movements) is a key point in ensuring the acceptance of robots in a society of people who are not specialists in operating robotic devices. Moreover, most existing robots have limited abilities of perception, cognition and behavior in comparison with humans. In this context, this research project investigated the potential of the robotic architecture of the NAO humanoid robot in terms of its ability to interact with humans through imitation of a person's body movements and through control of the robot. As for sensors, we used a non-intrusive depth camera built into the Kinect device. As for techniques, some mathematical concepts were applied to abstract the spatial configurations of selected joints/limbs of the human body; these configurations were captured using the OpenNI library. The experiments covered imitation and control of the robot, evaluated with several users. The results of these experiments showed a satisfactory performance for the developed system.
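One common mathematical abstraction of skeleton configurations of the kind captured here is the angle at a joint, computed from three tracked 3-D joint positions. The sketch below is a generic illustration under assumed coordinates, not the project's actual code:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3-D points a-b-c, e.g. the
    elbow angle from tracked shoulder, elbow and wrist positions."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against floating-point values just outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Such angles are a compact, view-independent description of a pose that can be mapped onto a humanoid's joint commands for imitation.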
47.
Estimating Human Limb Motion Using Skin Texture and Particle Filtering. Holmberg, Björn, January 2008
Estimating human motion is the topic of this thesis. We are interested in accurately estimating the motion of a human body using only video images capturing the subject in motion. Video images from up to two cameras are considered. The first main topic of the thesis is the investigation of a new type of input data: texture. This texture can be added to the human body segment under study, or it can be the natural texture of the skin. In paper I we investigate whether added texture, together with a two-camera system, can provide enough information to estimate the knee joint center location. Evaluation is made using a marker-based system run in parallel with the two-camera video system. The results show promise for the use of texture: the marker-based and texture-based estimates differ in absolute values, but their variations are similar, indicating that texture is in fact usable for this purpose. In papers II and III we further investigate the usability of skin texture in images as input for motion estimation. Paper II approaches the problem of estimating human limb motion in the image plane. An image-histogram-based mutual information criterion is used to decide whether an extracted image patch from frame k is a good match to some location in frame k+1. Evaluation is again performed using a marker-based system synchronized to the video stream. The results are very promising for skin-texture-based motion estimation in 2D. In paper III, basically the same approach is taken as in paper II, with the substantial difference that estimation of three-dimensional motion is addressed. Two video cameras are used and the image patch matching is performed both between cameras (inter-camera) in frame k and within each camera's images (intra-camera) from frame k to k+1. The inter-camera matches yield triangulated three-dimensional estimates on the approximate surface of the skin.
The intra-camera matches provide a way to connect the three-dimensional points between frames k and k+1. The resulting one-step three-dimensional trajectories are then used to estimate rigid body motion using least squares methods. The results show that there is still some work to be done before this texture-based method can be an alternative to marker-based methods. In paper IV the second main topic of the thesis is discussed: model-based techniques for estimating human motion. A kinematic model of the thigh and shank segments is built with an anatomic model of the knee. Using this model, the popular particle filter and typical simulated data from the triangulation in paper III, the motion variables of the thigh and shank segments can be estimated, including one static model parameter used to describe the knee model. The results from this investigation show good promise for the use of triangulated skin texture as input to such a model-based approach.
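The histogram-based mutual information criterion used for patch matching can be sketched as follows. Two patches match well when their joint intensity histogram is far from the product of the marginals, i.e. MI = Σ p(x,y) log(p(x,y)/(p(x)p(y))) is large. This is a generic illustration, with the bin count and patch sizes as assumptions:

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Histogram-based mutual information between two equally sized image
    patches, usable as a patch-matching score: identical or strongly
    dependent patches score high, unrelated patches score near zero."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of patch_b
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A matcher would slide candidate locations in frame k+1 and keep the one maximizing this score against the patch extracted from frame k.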
48.
Radar simulation of human activities in non-line-of-sight environments. Sundar Ram, Shobha, 13 August 2012
The capability to detect, track and monitor human activities behind building walls and other non-line-of-sight environments is an important component of security and surveillance operations. Over the years, both ultrawideband and Doppler-based radar techniques have been researched and developed for tracking humans behind walls. In particular, Doppler radars capture some interesting features of the human radar returns, called microDopplers, that arise from the dynamic movements of the different body parts. Current research efforts have focused on building hardware sensors with very specific capabilities. This dissertation instead focuses on developing a physics-based Doppler radar simulator to generate the dynamic signatures of complex human motions in non-line-of-sight environments. The simulation model incorporates dynamic human motion, electromagnetic scattering mechanisms, channel propagation effects and radar sensor parameters. Detailed, feature-by-feature analyses of the resulting radar signatures are carried out to enhance our fundamental understanding of human sensing using radar. First, a methodology for simulating the radar returns from complex human motions in free space is presented. For this purpose, computer animation data from motion capture technologies are exploited to describe the human movements. Next, a fast, simple, primitive-based electromagnetic model is used to simulate the human body. The microDopplers of several human motions such as walking, running, crawling and jumping are generated by integrating the animation models of humans with the electromagnetic model of the human body. Next, a methodology for generating the microDoppler radar signatures of humans moving behind walls is presented. This involves combining wall propagation functions derived from finite-difference time-domain (FDTD) simulation with the free-space radar simulations of humans.
The resulting hybrid simulator of the human and wall is used to investigate the effects of both homogeneous and inhomogeneous walls on human microDopplers. The results are further corroborated by basic point-scatterer analysis of different wall effects. The wall studies are followed by an analysis of the effects of flat grounds on human radar signatures. The ground effect is modeled using the method of images and a ground reflection coefficient. A suitable Doppler radar testbed is developed in the laboratory for simulation validation. Measured data of different human activities are collected in both line-of-sight and through-wall environments, and the resulting microDoppler signatures are compared with the simulation results. The human microDopplers are best observed in the joint time-frequency space. Hence, suitable joint time-frequency transforms are investigated for improving the display and readability of both simulated and measured spectrograms. Finally, two new Doppler radar paradigms are considered. First, a scenario is considered where multiple, spatially distributed Doppler radars measure the microDopplers of a moving human from different viewing angles. The possibility of using these microDoppler data for estimating the positions of different point scatterers on the human body is investigated. Second, a scenario is considered where multiple Doppler radars are collocated in a two-dimensional (2-D) array configuration. The possibility of generating frontal images of human movements using joint Doppler and 2-D spatial beamforming is considered. The performance of this concept is compared with that of conventional 2-D array processing without Doppler processing.
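The point-scatterer view underlying this kind of micro-Doppler analysis can be illustrated with a minimal model: a scatterer at range R(t) produces the baseband phase history exp(-j4πR(t)/λ), so a radial velocity v yields a Doppler shift f_d = 2v/λ. The sketch below is a toy illustration; the carrier frequency, sampling rate, and motion are hypothetical and unrelated to the dissertation's simulator.

```python
import numpy as np

def point_scatterer_return(radial_dist_fn, t, fc=2.4e9, c=3e8):
    """Baseband return of a single point scatterer: complex phase history
    exp(-j*4*pi*R(t)/lambda) for a radar at carrier frequency fc."""
    lam = c / fc
    return np.exp(-1j * 4 * np.pi * radial_dist_fn(t) / lam)

def estimate_doppler(signal, fs):
    """Crude Doppler estimate: frequency of the FFT magnitude peak."""
    spec = np.abs(np.fft.fft(signal))
    freqs = np.fft.fftfreq(len(signal), 1.0 / fs)
    return freqs[np.argmax(spec)]
```

Summing such returns over many moving body parts, each with its own R(t), is what spreads the spectrum into the micro-Doppler signature seen in a spectrogram.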
49.
Bring Your Body into Action: Body Gesture Detection, Tracking, and Analysis for Natural Interaction. Abedan Kondori, Farid, January 2014
Due to the large influx of computers into our daily lives, human-computer interaction has become crucially important. For a long time, focusing on what users need has been central to designing interaction methods. However, a newer perspective extends this attitude to encompass how human desires, interests, and ambitions can be met and supported. This implies that the way we interact with computers should be revisited. Centralizing human values, rather than user needs, is of the utmost importance for providing new interaction techniques. These values drive our decisions and actions, and are essential to what makes us human. This motivated us to introduce new interaction methods that support human values, particularly human well-being. The aim of this thesis is to design new interaction methods that empower humans to have a healthy, intuitive, and pleasurable interaction with tomorrow's digital world. To achieve this aim, this research is concerned with developing theories and techniques for exploring interaction methods beyond the keyboard and mouse, utilizing the human body. The thesis therefore addresses a very fundamental problem: human motion analysis. Its technical contributions introduce computer-vision-based, marker-less systems to estimate and analyze body motion. The main focus of this work is head and hand motion analysis, since these are the body parts most frequently used for interacting with computers. The thesis gives an insight into the technical challenges and provides new perspectives and robust techniques for solving the problem.