1

Recognition of human activities based on multi-camera sequences: application to people fall detection

Mousse, Ange Mikaël, 10 December 2016
Computer vision is a rapidly evolving research field. New strategies make networks of smart cameras possible, which has driven the development of many automatic camera-based surveillance applications. The work developed in this thesis concerns the design of an intelligent video-surveillance system for real-time fall detection. The first part of our work consists in robustly estimating the surface area of a person from two cameras with complementary views. This estimate is derived from the detection produced by each camera. To make the detection robust, we rely on two approaches. The first combines a motion-detection algorithm based on background modeling with an edge-detection algorithm; a fusion scheme is proposed to make the detection result more reliable. The second approach is based on the homogeneous regions of the image: a first segmentation determines the homogeneous regions, and the background is then modeled region by region. Once the foreground pixels are obtained, we approximate them with a polygon to reduce the amount of information to handle. To estimate the person's surface area, we propose a fusion strategy that aggregates the cameras' detections by computing the intersection of the projections of the polygons onto the ground plane, the projection being based on planar homography. From this estimate, we propose a strategy to detect falls; our approach also provides precise information about the person's posture.
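The first approach pairs background modeling with motion detection. The sketch below uses simple per-pixel frame differencing against a running-average background, a deliberately minimal stand-in: the thesis's actual model is region-based and fused with edge detection, and the `alpha` and `thresh` values are illustrative, not taken from the thesis.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: blend the new frame into the
    background with learning rate alpha."""
    return (1 - alpha) * bg + alpha * frame.astype(float)

def foreground_mask(bg, frame, thresh=25):
    """Pixels whose intensity differs from the background model by more
    than `thresh` are flagged as foreground."""
    return np.abs(frame.astype(float) - bg) > thresh
```

In a real pipeline the mask would then be cleaned with morphological operations before the polygon approximation step.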
The proposed algorithms were implemented and tested on public datasets in order to assess their effectiveness against existing state-of-the-art approaches. The results, detailed in this manuscript, demonstrate the contribution of our algorithms.
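The ground-plane fusion step described above can be sketched as follows: project each camera's foreground polygon through its planar homography, then intersect the projections. The homography matrix and polygons used below are made-up placeholders (real matrices come from camera calibration), and the clipping routine assumes a convex clip polygon.

```python
import numpy as np

def project_polygon(H, pts):
    """Map image-plane polygon vertices onto the ground plane with a
    3x3 planar homography H. pts is an (N, 2) array of pixel coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

def clip_convex(subject, clip):
    """Sutherland-Hodgman clipping: intersect polygon `subject` with the
    convex polygon `clip` (vertices listed counter-clockwise)."""
    out = [tuple(p) for p in subject]
    n = len(clip)
    for i in range(n):
        a, b = clip[i], clip[(i + 1) % n]
        # signed test: positive when p lies left of the directed edge a->b
        side = lambda p: (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
        inp, out = out, []
        for j, p in enumerate(inp):
            q = inp[(j + 1) % len(inp)]
            sp, sq = side(p), side(q)
            if sp >= 0:
                out.append(p)
            if (sp >= 0) != (sq >= 0):                 # edge p->q crosses the line
                t = sp / (sp - sq)
                out.append((p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1])))
    return out

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    x = np.array([p[0] for p in poly])
    y = np.array([p[1] for p in poly])
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))
```

The area of the intersection polygon is then the fused estimate of the person's ground-plane footprint, which the fall-detection stage can track over time.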
2

IntelliChair: a non-intrusive sitting posture and sitting activity recognition system

Fu, Teng, January 2015
Current Ambient Intelligence and Intelligent Environment research focuses on interpreting a subject's behaviour at the activity level by logging Activities of Daily Living (ADLs) such as eating and cooking. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which can improve activity analysis. Building on previous research into sitting posture detection, this research further analyses human sitting activity. The aim is to use a non-intrusive, low-cost, pressure-sensor-embedded chair system to recognise a subject's activity from their detected postures. The research has three steps: first, find a hardware solution for low-cost sitting posture detection; second, find a suitable sitting posture detection strategy; and third, correlate time-ordered sitting posture sequences with sitting activity. The author built a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system. The prototype work examines sensor selection and the integration of various sensors, and identifies the best choice for a low-cost, non-intrusive system. Subsequently, this research applies signal processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting posture and sitting activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair, and it is used for the posture recognition experiment.
The latter dataset was collected by asking the subjects to carry out their normal sitting activity routine on IntelliChair for four hours, and it is used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers are trained (one for spine postures and one for leg postures) and their performance evaluated. A Hidden Markov Model is used for sitting activity modelling and recognition, in order to recover the selected sitting activities from sitting posture sequences. After experimenting with candidate sensors, the Force Sensing Resistor (FSR) was selected as the pressure sensing unit for IntelliChair. Eight FSRs are mounted on the seat and back of a chair to gather haptic (i.e. touch-based) posture information. The research also explores an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and finds that it is not reliable for sitting posture detection because of joint drift. A suitable sampling rate for IntelliChair was determined experimentally to be 6 Hz. The posture classification results show that the SVM-based classifier is robust to “familiar” subject data (99.8% accuracy for spine postures and 99.9% for leg postures). On “unfamiliar” subject data, accuracy drops to 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy across four selected activities (relaxing, playing a game, working on a PC and watching video). These results show that individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition, suggesting that IntelliChair is suitable for individual use provided a per-user training stage is included.
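The HMM step above decodes an activity sequence from a sequence of recognised postures. A minimal log-space Viterbi decoder is sketched below; the two hidden states, the transition matrix and the emission matrix are made-up placeholders, not the probabilities learned from the four-hour recordings in the thesis.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence under an HMM, in log space.
    obs: list of observation (posture) indices; pi: (S,) initial probs;
    A: (S, S) transition probs; B: (S, O) emission probs."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]       # best log-prob ending in each state
    back = []                                  # backpointers, one array per step
    for o in obs[1:]:
        scores = delta[:, None] + logA         # scores[prev, next]
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    # backtrace from the best final state
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With sticky transitions and emissions peaked on the matching posture, the decoder smooths a noisy posture stream into stable activity segments, which is the role the HMM plays in the thesis pipeline.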
