21

Human Activity Recognition and Pathological Gait Pattern Identification

Niu, Feng 14 December 2007 (has links)
Human activity analysis has attracted great interest from computer vision researchers due to its promising applications in many areas such as automated visual surveillance, computer-human interaction, and motion-based identification and diagnosis. This dissertation presents work in two areas: general human activity recognition from video, and human activity analysis for the purpose of identifying pathological gait from both 3D captured data and from video. Even though research in human activity recognition has been going on for many years, many issues still need further study. These include the effective representation and modeling of human activities and the segmentation of sequences of continuous activities. In this thesis we present an algorithm that combines shape and motion features to represent human activities. In order to handle activity recognition from any viewing angle, we quantize the viewing direction and build a set of Hidden Markov Models (HMMs), where each model represents the activity from a given view. Finally, a voting-based algorithm is used to segment and recognize a sequence of human activities from video. Our method of representing activities has good attributes and is suitable for both low-resolution and high-resolution video. The voting-based algorithm performs the segmentation and recognition simultaneously. Experiments on two sets of video clips of different activities show that our method is effective. Our work on identifying pathological gait is based on the assumption of gait symmetry. Previous work on gait analysis measures the symmetry of gait based on ground reaction force data, stance time, swing time, or step length. Since the trajectories of the body parts contain information about the whole-body movement, we measure the symmetry of the gait based on the trajectories of the body parts. Two algorithms, which can work with different data sources, are presented. The first algorithm works on 3D motion-captured data and the second works on video data. Both algorithms use a support vector machine (SVM) for classification. Each of the two methods has three steps: the first step is data preparation, i.e., obtaining the trajectories of the body parts; the second step is gait representation based on a measure of gait symmetry; and the last step is SVM-based classification. For 3D motion-captured data, a set of features based on the Discrete Fourier Transform (DFT) is used to represent the gait. We demonstrate the accuracy of the classification with a set of experiments showing that the method for 3D motion-captured data is highly effective. For video data, a model-based tracking algorithm for human body parts is developed to prepare the data. Then, a symmetry measure that works on sequences of 2D data, i.e., sequences of video frames, is derived to represent the gait. We performed experiments on both 2D projected data and real video data to examine this algorithm. The experimental results on 2D projected data showed that the presented algorithm is promising for identifying pathological gait from video. The experimental results on the real video data are not as good as the results on 2D projected data. We believe that better results could be obtained if the accuracy of the tracking algorithm were improved.
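As a rough illustration of the 3D pipeline's second and third steps (gait representation from a symmetry measure, then SVM classification), the sketch below derives DFT-based features from left/right body-part trajectories. The feature layout, coefficient count, and synthetic data are assumptions for illustration, not the dissertation's actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

def dft_gait_features(trajectory, n_coeffs=8):
    """Magnitudes of the first DFT coefficients of one body-part
    trajectory (array of shape [T, 3]: 3D positions over a gait cycle)."""
    spectrum = np.fft.rfft(trajectory - trajectory.mean(axis=0), axis=0)
    return np.abs(spectrum[:n_coeffs]).ravel()

def symmetry_features(left_traj, right_traj, n_coeffs=8):
    """Compare the spectra of corresponding left/right body parts;
    a symmetric gait yields small left-right spectral differences."""
    left = dft_gait_features(left_traj, n_coeffs)
    right = dft_gait_features(right_traj, n_coeffs)
    return np.concatenate([left, right, np.abs(left - right)])

# Toy stand-in for motion-captured data: 40 gait cycles, 100 frames each.
rng = np.random.default_rng(0)
cycles = [(rng.normal(size=(100, 3)), rng.normal(size=(100, 3))) for _ in range(40)]
labels = rng.integers(0, 2, size=40)          # 0 = normal gait, 1 = pathological

X = np.stack([symmetry_features(l, r) for l, r in cycles])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```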
22

Recognizing human activities from low-resolution videos

Chen, Chia-Chih, 1979- 01 February 2012 (has links)
Human activity recognition is one of the most intensively studied areas in computer vision. Most existing works do not treat video resolution as a problem, given the applications of general interest. However, with continuing concerns about global security and emerging needs for intelligent video analysis tools, activity recognition from low-resolution and low-quality videos has become a crucial topic for further research. In this dissertation, we present a series of approaches developed specifically to address the related issues of low-level image preprocessing, single-person activity recognition, and human-vehicle interaction reasoning from low-resolution surveillance videos. Human cast shadows are one of the major issues that adversely affect the performance of an activity recognition system, because the shadow direction varies with the time of day and the date of the year. To better resolve this problem, we propose a shadow removal technique that effectively eliminates a human shadow cast by a light source of unknown direction. A multi-cue shadow descriptor is employed to characterize the distinctive properties of shadows. Our approach detects, segments, and then removes shadows. We propose two different methods to recognize single-person actions and activities from low-resolution surveillance videos. The first approach adopts a joint feature histogram-based representation, which is the concatenation of subspace-projected gradient and optical flow features in time. However, in this problem, the use of low-resolution, coarse, pixel-level features alone limits the recognition accuracy. Therefore, in the second work, we contribute a novel mid-level descriptor, which converts an activity sequence into simultaneous temporal signals at body parts. With our representation, activities are recognized through both the local video content and the short-time spectral properties of body parts' movements. We draw analogies between activity and speech recognition and show that our speech-like representation and recognition scheme improves recognition performance on several low-resolution datasets. To complete the research on this subject, we also tackle the challenging problem of recognizing human-vehicle interactions from low-resolution aerial videos. We present a temporal logic-based approach that does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and the rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle spatial relationships against the pre-specified event definitions in a piecewise fashion. Our framework can be generalized to recognize any type of human-vehicle interaction from aerial videos. / text
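As a hedged sketch of the first single-person approach (a joint histogram of subspace-projected gradient and optical-flow features), the code below uses Sobel gradients, Farnebäck optical flow, and a per-clip PCA as simplifying stand-ins; none of these specific choices are claimed to match the dissertation.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

def frame_histograms(prev_gray, gray, n_bins=8):
    """Orientation histograms of image gradients and of dense optical
    flow for one low-resolution grayscale frame."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad_hist, _ = np.histogram(np.arctan2(gy, gx), bins=n_bins,
                                range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_hist, _ = np.histogram(np.arctan2(flow[..., 1], flow[..., 0]),
                                bins=n_bins, range=(-np.pi, np.pi),
                                weights=np.linalg.norm(flow, axis=2))
    return np.concatenate([grad_hist, flow_hist])

def clip_descriptor(frames, n_components=10):
    """Per-frame histograms, projected to a low-dimensional subspace (here a
    per-clip PCA, a simplification) and concatenated in time."""
    hists = np.stack([frame_histograms(a, b) for a, b in zip(frames, frames[1:])])
    return PCA(n_components=min(n_components, len(hists))).fit_transform(hists).ravel()

# Tiny usage example on random low-resolution frames.
frames = [np.random.randint(0, 256, (60, 80), dtype=np.uint8) for _ in range(16)]
print(clip_descriptor(frames).shape)
```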
23

E-shape analysis

Sroufe, Paul. Dantu, Ram, January 2009 (has links)
Thesis (M.S.)--University of North Texas, Dec., 2009. / Title from title page display. Includes bibliographical references.
24

Location-based activity recognition

Liao, Lin. January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (p. 123-132).
25

Εφαρμογή ασύρματου δικτύου για την αντιμετώπιση έκτακτης ανάγκης / Wireless network application for emergency response

Κολιόπουλος, Κυριάκος-Άρης 15 April 2013 (has links)
The subject of this work is the study and construction of a wireless network application that recognizes human activity and detects falls in real time, and that allows the results to be monitored from a remote location. The specific aim is to recognize the four basic states of human physical activity (sitting, lying down, standing, moving) and to detect falls using the accelerometers offered by the SunSPOT platform, as well as to connect the setup to the Internet so that information about the state of the system's wearer is available at a remote location. A study was carried out on different sensor placements, the sampling frequency, the classification algorithms, and the methods for making the information available on the Internet. For recognizing the states and detecting falls, two SunSPOT sensor platforms were used, one on the chest (master) and one on the right quadriceps (slave). / A wearable wireless sensor network application performing human activity recognition and fall detection using the Naïve Bayesian Classifier algorithm on the SunSPOT platform, accompanied by a web application on the Google App Engine platform to monitor the classification results from a remote location and to automatically send an e-mail notification in case of emergency.
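For illustration only, a naïve Bayes activity/fall classifier over accelerometer windows might be sketched as follows; the thesis runs its classifier on SunSPOT hardware in Java, so the window length, feature set, and synthetic data here are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

STATES = ["sitting", "lying", "standing", "moving", "fall"]

def window_features(acc_window):
    """Simple per-window features from a chest-worn 3-axis accelerometer:
    per-axis mean and standard deviation plus magnitude statistics."""
    mag = np.linalg.norm(acc_window, axis=1)
    return np.concatenate([acc_window.mean(axis=0),
                           acc_window.std(axis=0),
                           [mag.mean(), mag.max()]])

# Toy stand-in for labelled SunSPOT windows (e.g. 2 s at 25 Hz = 50 samples).
rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, len(STATES), size=200)

X = np.stack([window_features(w) for w in windows])
clf = GaussianNB().fit(X, labels)
print(STATES[clf.predict(X[:1])[0]])
```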
26

Reconnaissance en-ligne d'actions 3D par l'analyse des trajectoires du squelette humain / Online 3D action recognition by analyzing the trajectories of the human skeleton

Boulahia, Said Yacine 11 July 2018 (has links)
The objective of this thesis is to design an original transparent approach able to detect the occurrence of an action in real time, in an unsegmented stream and ideally as early as possible. This work is part of a collaboration between two IRISA-Inria teams in Rennes, namely Intuidoc and MimeTIC. By taking advantage of the complementary expertise of the two research teams, we propose to reconsider the needs and difficulties encountered in modeling, recognizing, and detecting a 3D action, proposing new solutions in light of the advances made in 2D handwriting modeling. The contributions of this thesis are grouped into three main parts. In the first part, we propose a new approach to model and recognize a pre-segmented action. Indeed, it is first necessary to develop a representation able to characterize a given action as finely as possible in order to facilitate recognition. In the second part, we introduce an approach for recognizing an action in an unsegmented stream. Finally, in the third part, we extend this last approach to the early characterization of an action from very little information. For each of these three problems, we explicitly identified the difficulties to be considered in order to describe them completely and to design targeted solutions for each. The experimental results obtained on different action benchmarks attest to the validity of our approach. In addition, through collaborations that took place during the thesis, the developed approaches were deployed in three applications, including applications in animation and in dynamic hand gesture recognition.
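A minimal sketch of the online-detection idea, assuming handwriting-style shape features over a sliding window of skeleton frames and a generic scikit-learn-style classifier exposing predict_proba (trained separately on pre-segmented windows); this is not the thesis's feature set or detector.

```python
import numpy as np
from collections import deque

def trajectory_shape_features(joints):
    """Handwriting-style shape features for a window of skeleton frames
    (array [T, J, 3]): per-joint path length, net displacement, and extent."""
    diffs = np.diff(joints, axis=0)
    path_len = np.linalg.norm(diffs, axis=2).sum(axis=0)             # [J]
    displacement = np.linalg.norm(joints[-1] - joints[0], axis=1)     # [J]
    extent = (joints.max(axis=0) - joints.min(axis=0)).reshape(-1)    # [J*3]
    return np.concatenate([path_len, displacement, extent])

def online_detection(frame_stream, classifier, window=30, threshold=0.8):
    """Slide a fixed-length window over an unsegmented skeleton stream and
    report an action as soon as the classifier is confident enough."""
    buf = deque(maxlen=window)
    for t, frame in enumerate(frame_stream):          # frame: [J, 3]
        buf.append(frame)
        if len(buf) == window:
            probs = classifier.predict_proba(
                trajectory_shape_features(np.stack(buf))[None, :])[0]
            if probs.max() >= threshold:
                yield t, int(probs.argmax())
```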
27

Real Time Estimation and Prediction of Similarity in Human Activity Using Factor Oracle Algorithm

January 2016 (has links)
abstract: Human motion is defined as an amalgamation of several physical traits such as bipedal locomotion, posture and manual dexterity, and mental expectation. In addition to the “positive” body form defined by these traits, casting light on the body produces a “negative” of the body: its shadow. We often use silhouettes interchangeably with shadows to emphasize indifference to interior features. In a manner of speaking, the shadow is an alter ego that imitates the individual. The principal value of the shadow is its non-invasive behaviour of reflecting precisely the actions of the individual it is attached to. Nonetheless we can still think of the body’s shadow not as the body but as its alter ego. Based on this premise, my thesis creates an experiential system that extracts the data related to the contour of your human shape and gives it a texture and life of its own, so as to emulate your movements and postures, and to be your extension. In technical terms, my thesis extracts abstraction from a pre-indexed database that could be generated from an offline data set or in real time to complement these actions of a user in front of a low-cost optical motion capture device like the Microsoft Kinect. This notion could be the system’s interpretation of the action, which creates modularized art through the abstraction’s ‘similarity’ to the live action. Through my research, I have developed a stable system that tackles various connotations associated with shadows and the need to determine the ideal features that contribute to the relevance of the actions performed. Factor Oracle [3] pattern interpretation is tested with a feature bin of videos. The system is also flexible towards several nearest-neighbour search methods and a machine learning module that derives the same output. The overall purpose is to establish this in real time and provide constant feedback to the user. This can be expanded to handle larger dynamic data. In addition to estimating human actions, my thesis tests various nearest-neighbour search methods in real time depending upon the data stream. This provides a basis for understanding the varying parameters that complement human activity recognition and feature matching in real time. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
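Since the abstract leans on the Factor Oracle, here is a small, self-contained sketch of the standard online factor-oracle construction (after Allauzen, Crochemore, and Raffinot), shown over a plain string rather than the thesis's indexed video features.

```python
def build_factor_oracle(sequence):
    """Online factor-oracle construction: returns the transition table and
    suffix (supply) links over the given symbol sequence."""
    m = len(sequence)
    trans = [dict() for _ in range(m + 1)]   # trans[state][symbol] -> state
    suffix = [-1] * (m + 1)
    for i, sym in enumerate(sequence, start=1):
        trans[i - 1][sym] = i                # factor transition to the new state
        k = suffix[i - 1]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i                # external transitions added on the fly
            k = suffix[k]
        suffix[i] = 0 if k == -1 else trans[k][sym]
    return trans, suffix

def accepts(trans, query):
    """The oracle accepts (at least) every factor of the original sequence."""
    state = 0
    for sym in query:
        if sym not in trans[state]:
            return False
        state = trans[state][sym]
    return True

trans, suffix = build_factor_oracle("abbbaab")
print(accepts(trans, "bba"))   # True: "bba" occurs in "abbbaab"
```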
28

Deep Learning Action Anticipation for Real-time Control of Water Valves: Wudu use case

Felemban, Abdulwahab A. 12 1900 (has links)
Human-machine interaction can make many daily activities more convenient. The development of smart devices has fostered underlying smart systems that provide smart, personalized control of devices. The first step in controlling any device is observation; by understanding the surrounding environment and human activity, a smart system can physically control a device. Human activity recognition (HAR) is essential in many smart applications such as self-driving cars, human-robot interaction, and automatic systems such as infrared (IR) taps. Human-centric systems must perform physical tasks in real time, so for human-machine interaction the anticipation of human actions is essential. IR taps have delay limitations because the proximity sensor signals the solenoid valve only when the user’s hands are exactly below the tap; the hardware and electronics delay causes inconvenience and wastes water. In this thesis, an alternative control based on deep learning action anticipation is proposed. Humans interact with taps for various tasks such as washing hands, washing the face, and brushing teeth, to name a few. We focus on a small subset of these activities: specifically, the activities carried out sequentially during an Islamic cleansing ritual called Wudu. The skeleton modality is widely used in HAR because it provides abstract information that is scale-invariant and robust to imaging variations. We used depth cameras to obtain accurate 3D human skeletons of users performing Wudu, and the sequences were manually annotated with ten atomic action classes. This thesis investigated different deep learning networks with architectures optimized for real-time action anticipation. The proposed methods were mainly based on the Spatial-Temporal Graph Convolutional Network (ST-GCN). With further improvements, we proposed a Gated Recurrent Unit (GRU) model with an ST-GCN backbone to extract local temporal features; the GRU processes the local temporal latent features sequentially to predict future actions. The proposed models scored 94.14% recall on the binary classification used to turn the water tap on and off, and 81.58-89.08% recall on multiclass classification.
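As an illustrative sketch only, the model below replaces the ST-GCN backbone with a per-frame Dense layer and keeps a GRU anticipation head; the joint count, window length, class count, and data are assumptions, not the thesis's configuration.

```python
import numpy as np
import tensorflow as tf

NUM_JOINTS, NUM_CLASSES, WINDOW = 25, 10, 30   # assumed values for illustration

# Simplified anticipation model: a Dense layer stands in for the ST-GCN backbone
# and a GRU consumes per-frame features to predict the *upcoming* action class.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, NUM_JOINTS * 3)),   # flattened 3D skeletons
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64, activation="relu")),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data: windows of past skeleton frames, labelled with the next action.
x = np.random.rand(128, WINDOW, NUM_JOINTS * 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=128)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:1]).argmax())
```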
29

Spatio-temporal reasoning for semantic scene understanding and its application in recognition and prediction of manipulation actions in image sequences

Ziaeetabar, Fatemeh 07 May 2019 (has links)
No description available.
30

How can machine learning help identify cheating behaviours in physical activity-based mobile applications?

Kock, Elina, Sarwari, Yamma January 2020 (has links)
This study investigates the possibility of using machine learning for Human Activity Recognition (HAR) in Bamblup, a physical activity-based game for smartphones, in order to detect whether a player is cheating or is indeed performing the required activity. Sensor data from the accelerometer and gyroscope of an iPhone 7 was used to gather data from various people performing a set of activities. The activities of interest are jumping, squatting, stomping, and their cheating counterparts: fake jumping, fake squatting, and fake stomping. A Sequential model was created using the free open-source library TensorFlow. Feature selection was performed with the program WEKA (Waikato Environment for Knowledge Analysis) to select the attributes that provided the most information gain. These attributes were subsequently used to train the model in TensorFlow, which gave a classification accuracy of 66%. The fake activities were classified relatively well, and so was the stomping activity. Jumping and squatting had the lowest accuracy, at 21.43% and 28.57% respectively. Additionally, the Random Forest classifier in WEKA was tested on the dataset using 10-fold cross-validation, providing a classification accuracy of 90.47%. Our findings imply that machine learning is a strong candidate for aiding in the detection of cheating behaviours in mobile physical activity-based games.
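A hedged sketch of the two classifiers compared in the study, with scikit-learn standing in for WEKA's Random Forest, arbitrary layer sizes, and synthetic features in place of the selected attributes:

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

CLASSES = ["jump", "squat", "stomp", "fake_jump", "fake_squat", "fake_stomp"]

# Toy stand-in for the selected accelerometer/gyroscope features (e.g. per-window
# means, variances, correlations); the real study selects attributes with WEKA.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20)).astype("float32")
y = rng.integers(0, len(CLASSES), size=300)

# A small Sequential model in TensorFlow, as in the study (layer sizes assumed).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

# Random Forest with 10-fold cross-validation (scikit-learn standing in for WEKA).
scores = cross_val_score(RandomForestClassifier(), X, y, cv=10)
print(f"Random Forest 10-fold accuracy: {scores.mean():.2%}")
```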
