About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Human Activity Recognition and Pathological Gait Pattern Identification

Niu, Feng 14 December 2007 (has links)
Human activity analysis has attracted great interest from computer vision researchers due to its promising applications in areas such as automated visual surveillance, human-computer interaction, and motion-based identification and diagnosis. This dissertation presents work in two areas: general human activity recognition from video, and human activity analysis for identifying pathological gait from both 3D motion-captured data and video. Although research on human activity recognition has been ongoing for many years, many issues still require further study, including the effective representation and modeling of human activities and the segmentation of sequences of continuous activities. In this thesis we present an algorithm that combines shape and motion features to represent human activities. To handle activity recognition from any viewing angle, we quantize the viewing direction and build a set of Hidden Markov Models (HMMs), where each model represents the activity from a given view. Finally, a voting-based algorithm is used to segment and recognize a sequence of human activities from video. Our representation of activities is suitable for both low-resolution and high-resolution video, and the voting-based algorithm performs segmentation and recognition simultaneously. Experiments on two sets of video clips of different activities show that our method is effective. Our work on identifying pathological gait is based on the assumption of gait symmetry. Previous work on gait analysis measures the symmetry of gait from ground reaction force data, stance time, swing time, or step length. Since the trajectories of the body parts contain information about the whole-body movement, we measure gait symmetry from the trajectories of the body parts. Two algorithms, which work with different data sources, are presented.
The first algorithm works on 3D motion-captured data and the second on video data. Both use a support vector machine (SVM) for classification, and each proceeds in three steps: data preparation, i.e., obtaining the trajectories of the body parts; gait representation based on a measure of gait symmetry; and SVM-based classification. For 3D motion-captured data, a set of features based on the Discrete Fourier Transform (DFT) is used to represent the gait, and a set of experiments demonstrates that the method is highly accurate. For video data, a model-based tracking algorithm for human body parts is developed to prepare the data; a symmetry measure that works on the sequence of 2D data, i.e., the sequence of video frames, is then derived to represent the gait. We evaluated this algorithm on both 2D projected data and real video. The results on 2D projected data show that the algorithm is promising for identifying pathological gait from video; the results on real video are not as good, and we believe better results could be obtained if the accuracy of the tracking algorithm were improved.
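The DFT-based gait-symmetry idea above can be sketched in a few lines: compare the low-frequency DFT magnitudes of the left- and right-limb trajectories, so that a symmetric gait yields a near-zero distance. This is an illustrative reconstruction, not the dissertation's code; the coefficient count, normalization, and distance-based score are assumptions.

```python
import numpy as np

def dft_gait_features(trajectory, n_coeffs=5):
    """Represent a 1-D body-part trajectory by the magnitudes of its
    lowest-frequency DFT coefficients (coefficient selection is an
    assumption, not the dissertation's exact feature design)."""
    spectrum = np.fft.rfft(trajectory - np.mean(trajectory))
    mags = np.abs(spectrum[1:n_coeffs + 1])
    return mags / (np.linalg.norm(mags) + 1e-12)  # scale-invariant

def symmetry_score(left_traj, right_traj, n_coeffs=5):
    """Compare left/right limb spectra; identical periodic motion
    (a symmetric gait) yields a score near zero."""
    fl = dft_gait_features(left_traj, n_coeffs)
    fr = dft_gait_features(right_traj, n_coeffs)
    return float(np.linalg.norm(fl - fr))

# Example: a perfectly symmetric gait (right = half-cycle shift of left)
t = np.linspace(0, 4 * np.pi, 200)
left = np.sin(t)
right = np.sin(t + np.pi)           # phase shift leaves magnitudes equal
print(symmetry_score(left, right))  # → 0.0 (identical magnitude spectra)
```

An SVM would then be trained on such features to separate normal from pathological gait; a limp that changes one limb's harmonic content raises the score.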
42

A wearable real-time system for physical activity recognition and fall detection

Yang, Xiuxin 23 September 2010
This thesis designs and implements a wearable system that recognizes physical activities and detects falls in real time. Recognizing people's physical activity has a broad range of applications, including helping people maintain their energy balance through health assessment and intervention tools, investigating the links between common diseases and levels of physical activity, and providing feedback to motivate individuals to exercise. In addition, fall detection has become an active research topic due to the growing population over 65 throughout the world and the serious consequences of falls.

In this work, the Sun SPOT wireless sensor system is used as the hardware platform for recognizing physical activity and detecting falls. Sensors with tri-axis accelerometers collect acceleration data, from which useful features are extracted. Evaluation of several algorithms indicates that Naive Bayes outperforms other popular algorithms in both accuracy and ease of implementation for this application.

The wearable system works in two modes, indoor and outdoor, depending on the user's needs. The Naive Bayes classifier was successfully implemented on the Sun SPOT sensor. An evaluation of sampling rates shows that 20 Hz is an optimal sampling frequency for this application. If only one sensor is available to recognize physical activity, the best location is the thigh; if two sensors are available, the combination of the left and right thighs is the best option, achieving 90.52% overall accuracy in the experiment.

For fall detection, a master sensor is attached to the chest and a slave sensor to the thigh to collect acceleration data. The results show that all falls are successfully detected: forward, backward, leftward, and rightward falls are distinguished from standing and walking by the fall detection algorithm.
Normal physical activities are not misclassified as falls, and no false alarms occurred while the user wore the system in daily life.
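The classification pipeline described above — per-window accelerometer features fed to a Naive Bayes classifier — can be sketched as follows. This is an illustrative re-implementation, not the thesis code; the feature set (per-axis mean and standard deviation over 1 s windows) is an assumption.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes, the classifier family the thesis
    found most practical on the Sun SPOT (illustrative, not the thesis
    implementation)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(c) + sum_d log N(x_d; mu_cd, var_cd), argmax over classes
        ll = (np.log(self.prior_)
              - 0.5 * np.sum(np.log(2 * np.pi * self.var_), axis=1)
              - 0.5 * np.sum((X[:, None, :] - self.mu_) ** 2 / self.var_, axis=2))
        return self.classes_[np.argmax(ll, axis=1)]

def window_features(window):
    """Mean and standard deviation per accelerometer axis over one
    window (e.g. 20 samples = 1 s at the 20 Hz rate the thesis found
    optimal); the choice of these two statistics is an assumption."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])
```

On a Sun SPOT-class device, training would happen offline and only the per-class means, variances, and priors would be stored on the sensor, which is part of why Naive Bayes is easy to implement on constrained hardware.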
44

HASC Challenge: Gathering Large Scale Human Activity Corpus for the Real-World Activity Understandings

Nishio, Nobuhiko, Sumi, Yasuyuki, Kawahara, Yoshihiro, Inoue, Sozo, Murao, Kazuya, Terada, Tsutomu, Kaji, Katsuhiko, Iwasaki, Yohei, Ogawa, Nobuhiro, Kawaguchi, Nobuo 12 March 2011 (has links)
Article No. 27
45

Recognizing human activities from low-resolution videos

Chen, Chia-Chih, 1979- 01 February 2012 (has links)
Human activity recognition is one of the most intensively studied areas in computer vision. Most existing work does not treat video resolution as a problem, given the applications of interest. However, with continuing concerns about global security and emerging needs for intelligent video analysis tools, activity recognition from low-resolution, low-quality video has become a crucial research topic. In this dissertation, we present a series of approaches developed specifically to address issues in low-level image preprocessing, single-person activity recognition, and human-vehicle interaction reasoning from low-resolution surveillance videos. Human cast shadows are a major issue that adversely affects the performance of an activity recognition system, because shadow direction varies with the time of day and the date of the year. To resolve this problem, we propose a shadow removal technique that effectively eliminates a human shadow cast by a light source of unknown direction. A multi-cue shadow descriptor characterizes the distinctive properties of shadows; our approach detects, segments, and then removes them. We propose two methods to recognize single-person actions and activities from low-resolution surveillance videos. The first adopts a joint feature-histogram representation, the concatenation of subspace-projected gradient and optical flow features over time. However, low-resolution, coarse, pixel-level features alone limit recognition accuracy. Therefore, in the second work, we contribute a novel mid-level descriptor that converts an activity sequence into simultaneous temporal signals at body parts. With this representation, activities are recognized through both the local video content and the short-time spectral properties of the body parts' movements.
We draw analogies between activity and speech recognition and show that our speech-like representation and recognition scheme improves performance on several low-resolution datasets. To complete the research on this subject, we also tackle the challenging problem of recognizing human-vehicle interactions from low-resolution aerial videos. We present a temporal-logic-based approach that does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle spatial relationships against pre-specified event definitions in a piecewise fashion. Our framework can be generalized to recognize any type of human-vehicle interaction from aerial videos.
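The speech-like, short-time spectral view of a body part's temporal signal can be sketched as follows; the window, hop, and bin counts are illustrative assumptions, not the dissertation's parameters.

```python
import numpy as np

def short_time_spectra(signal, win=16, hop=8, n_bins=4):
    """Slide a window over a body part's temporal signal and keep the
    low-frequency magnitude spectrum of each window — the speech-inspired
    idea of describing motion by its short-time spectral content
    (window/hop sizes here are assumptions)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        chunk = signal[start:start + win] * np.hanning(win)  # taper edges
        frames.append(np.abs(np.fft.rfft(chunk))[:n_bins])
    return np.array(frames)  # shape: (num_windows, n_bins)
```

A periodic limb movement shows up as a stable dominant frequency bin across windows, much like a sustained phoneme in a spectrogram, which is what makes speech-style recognition machinery applicable.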
46

Recognizing human activity using RGBD data

Xia, Lu, active 21st century 03 July 2014 (has links)
Traditional computer vision algorithms try to understand the world using visible-light cameras, but this data source has inherent limitations. First, visible-light images are sensitive to illumination changes and background clutter. Second, the 3D structure of the scene is lost when the 3D world is projected onto 2D images, and recovering that information from 2D images is a challenging problem. Range sensors, which capture the 3D characteristics of a scene, have existed for over thirty years, but earlier sensors were too expensive, difficult to use in human environments, slow at acquiring data, or poor at estimating distance. Recently, easy access to RGBD data at real-time frame rates has led to a revolution in perception and inspired much new research. I propose algorithms to detect persons and understand activities using RGBD data, and demonstrate that solutions to many computer vision problems can be improved with the added depth channel. The 3D structural information can yield algorithms with real-time, view-invariant properties in a faster and easier fashion. When both data sources are available, features extracted from the depth channel can be combined with traditional features computed from the RGB channels to build more robust systems with enhanced recognition abilities, able to deal with more challenging scenarios. As a starting point, the first problem is to find persons of various poses in the scene, whether moving or static. Localizing humans from RGB images is limited by lighting conditions and background clutter; depth images give alternative ways to find the humans in the scene. In the past, detection of humans from range data was usually achieved by tracking, which does not work for indoor person detection.
In this thesis, I propose a model-based approach that detects persons using the structural information embedded in the depth image. I propose a 2D head-contour model and a 3D head-surface model to look for the head-shoulder part of the person, then a segmentation scheme that separates the full human body from the background and extracts its contour. I also give a tracking algorithm based on the detection result. I then turn to recognizing human actions and activities, proposing two features. The first is drawn from the skeletal joint locations estimated from a depth image: a compact representation of human posture called the histogram of 3D joint locations (HOJ3D). This representation is view-invariant and the whole algorithm runs in real time, so it can benefit many applications that need a fast estimate of the subject's posture and action. The second is a spatio-temporal feature for depth video called the Depth Cuboid Similarity Feature (DCSF). Interest points are extracted with an algorithm that effectively suppresses noise and finds salient human motions; a DCSF is extracted around each interest point, forming the description of the video contents. This descriptor recognizes activities without depending on skeleton information or pre-processing steps such as motion segmentation, tracking, or even image de-noising or hole-filling, making it more flexible and widely applicable. Finally, all the features developed herein are combined to solve a novel problem: first-person human activity recognition using RGBD data. Traditional activity recognition algorithms work from a third-person perspective; I propose to recognize activities from a first-person perspective with RGBD data.
This task is novel and extremely challenging due to the large amount of camera motion, caused either by self-exploration or by the response to interaction. I extract 3D optical flow features as motion descriptors, 3D skeletal joint features as posture descriptors, and spatio-temporal features as local appearance descriptors to describe the first-person videos. To address the ego-motion of the camera, I propose an attention mask that guides the recognition procedure and separates features in the ego-motion region from those in the independent-motion region. The 3D features are very useful for summarizing the discriminative information of the activities, and combining them with existing 2D features yields more robust recognition and makes the algorithm capable of handling more challenging cases.
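The HOJ3D posture descriptor described above can be sketched as a spherical histogram of joint positions relative to the hip center; the bin counts below are illustrative, and the thesis's exact binning and reference-frame alignment are not reproduced here.

```python
import numpy as np

def hoj3d(joints, hip_center, n_azimuth=12, n_elevation=6):
    """HOJ3D-style sketch: express joint positions relative to the hip
    center and bin them by spherical angle, giving a compact,
    translation-invariant posture descriptor (bin counts assumed)."""
    rel = joints - hip_center
    az = np.arctan2(rel[:, 1], rel[:, 0])          # azimuth in (-pi, pi]
    r = np.linalg.norm(rel, axis=1) + 1e-12
    el = np.arcsin(np.clip(rel[:, 2] / r, -1, 1))  # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el,
        bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist.ravel() / len(joints)  # normalized histogram
```

Because only angles relative to the body center are used, translating the whole skeleton leaves the descriptor unchanged, which is the property that supports view-invariant, real-time posture estimation.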
47

Planning in Inhabited Environments : Human-Aware Task Planning and Activity Recognition

Cirillo, Marcello January 2010 (has links)
Promised some decades ago by researchers in artificial intelligence and robotics as an imminent breakthrough in our everyday lives, a robotic assistant that could work with us in our homes and workplaces is a dream still far from being fulfilled. The work presented in this thesis aims at bringing this vision a little closer to realization. We start from the assumption that an efficient robotic helper should not impose constraints on users' activities, but rather perform its tasks unobtrusively, fulfilling its own goals while helping people achieve theirs. The helper should also be able to consider the outcomes of possible future actions by the human users, assess how those would affect the environment with respect to its objectives, and predict when its support will be needed. In this thesis we address two highly interconnected problems that are essential for the cohabitation of people and service robots: robot task planning and human activity recognition. First, we present human-aware planning, our approach to high-level symbolic reasoning for robot plan generation. Human-aware planning applies when there is a controllable agent, the robot, whose actions we can plan, and one or more uncontrollable agents, the human users, whose future actions we can only try to predict; knowledge of the users' current and future activities is therefore an important prerequisite. We define human-aware planning as a new type of planning problem, formalize the extensions a classical planner needs to solve it, and present the implementation of a planner that satisfies all identified requirements. We also explore a second issue, which is a prerequisite to the first: human activity monitoring in intelligent environments.
We adopt a knowledge-driven approach to activity recognition, in which a constraint-based domain description correlates sensor readings with human activities. We validate our solutions to human-aware planning and activity recognition both theoretically and experimentally, describing a number of explanatory examples and test runs in a real environment.
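The constraint-based, knowledge-driven mapping from sensor readings to activities can be sketched as a small rule base with a temporal co-occurrence constraint; the sensors, activities, and window length below are invented for illustration and are not taken from the thesis's domain description.

```python
from collections import namedtuple

Event = namedtuple("Event", "sensor value t")  # a timestamped sensor reading

# Hypothetical rule base: an activity is recognized when all of its
# required readings occur within `window` seconds of one another.
RULES = {
    "cooking":  {("stove", "on"), ("kitchen_presence", "true")},
    "sleeping": {("bed_pressure", "true"), ("light", "off")},
}

def recognize(events, window=60.0):
    """Return the activities whose required readings all co-occur
    within the temporal window (a simplified constraint check)."""
    recognized = []
    for activity, required in RULES.items():
        times = {}
        for e in events:
            if (e.sensor, e.value) in required:
                times[(e.sensor, e.value)] = e.t  # keep latest occurrence
        if len(times) == len(required):
            ts = times.values()
            if max(ts) - min(ts) <= window:  # temporal constraint satisfied
                recognized.append(activity)
    return sorted(recognized)
```

A full constraint-based recognizer would reason over intervals and ordering constraints rather than a single co-occurrence window, but the shape of the inference — sensor readings filtered through a declarative domain description — is the same.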
48

E-shape analysis

Sroufe, Paul. Dantu, Ram, January 2009 (has links)
Thesis (M.S.), University of North Texas, December 2009. / Title from title page display. Includes bibliographical references.
49

Location-based activity recognition

Liao, Lin. January 2006 (has links)
Thesis (Ph.D.), University of Washington, 2006. / Vita. Includes bibliographical references (pp. 123-132).
50

Wireless network application for emergency response

Κολιόπουλος, Κυριάκος-Άρης 15 April 2013 (has links)
The subject of this work is the study and construction of a wireless network application for recognizing human activity and detecting falls in real time, and for monitoring the results from a remote location. The specific goals are to recognize the four basic states of human physical activity (sitting, lying, standing, moving) and to detect falls using the accelerometers of the SunSPOT platform, and to connect the system to the internet so that information about the wearer's condition is available remotely. Studies were carried out on different sensor placements, sampling frequencies, classification algorithms, and methods of publishing the information on the internet. Two SunSPOT sensor platforms were used for state recognition and fall detection, one on the chest (master) and one on the right quadriceps (slave). / In summary: a wearable wireless sensor network application performing human activity recognition and fall detection using the Naive Bayes classifier on the SunSPOT platform, accompanied by a web application on the Google App Engine platform to monitor the classification results from a remote location and automatically send e-mail notifications in case of emergency.
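A simple threshold-style baseline for the fall-detection part can be sketched as follows; the thesis itself uses a Naive Bayes classifier, and the thresholds and 20 Hz sampling assumption below are illustrative only.

```python
import math

def detect_fall(samples, g=9.81, impact_thr=2.5, still_thr=0.3):
    """Threshold baseline (not the thesis's classifier): flag a fall
    when a high-impact spike in the magnitude of the (ax, ay, az)
    acceleration signal is followed by near-1 g stillness, i.e. the
    wearer lying motionless after the impact."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) / g
            for ax, ay, az in samples]          # magnitude in units of g
    for i, m in enumerate(mags):
        if m > impact_thr:                      # impact spike detected
            tail = mags[i + 1:i + 11]           # ~0.5 s at an assumed 20 Hz
            if tail and max(tail) < 1.0 + still_thr:
                return True                     # still after impact → fall
    return False
```

A classifier-based detector like the one in the thesis improves on this by distinguishing fall directions and by not triggering on impact-like activities such as sitting down hard.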
