111 |
Recognition of human interactions with vehicles using 3-D models and dynamic context
Lee, Jong Taek, 1983- 11 July 2012 (has links)
This dissertation describes two distinctive methods for human-vehicle interaction recognition: one for ground-level videos and the other for aerial videos. For ground-level videos, this dissertation presents a novel methodology that estimates a detailed status of a scene involving multiple humans and vehicles. The system tracks their configuration even when they are performing complex interactions with severe occlusion, such as four persons exiting a car together. The motivation is to identify the 3-D states of vehicles (e.g. the status of doors) and their relations with persons, which is necessary to analyze complex human-vehicle interactions (e.g. breaking into or stealing a vehicle), as well as the motion of humans and car doors, which is needed to detect atomic human-vehicle interactions. A probabilistic algorithm has been designed to track humans and analyze their dynamic relationships with vehicles using dynamic context. We have focused on two ideas. One is that many simple events can be detected by low-level analysis, and these detected events must be contextually consistent with the human/vehicle status tracking results. The other is that motion cues influence the states in the current and future frames, so analyzing motion is critical to detecting such simple events. Our approach updates the probability of a person (or a vehicle) having a particular state based on these basic observed events. Probabilistic inference is used in the tracking process to match event-based evidence with motion-based evidence. For aerial videos, the object resolution is low, the visual cues are vague, and the detection and tracking of objects are consequently less reliable. Any method that requires accurate object tracking or exact matching of event definitions is therefore better avoided. To address these issues, we present a temporal-logic-based approach that does not require training from event examples.
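The abstract's key idea, updating the probability of an entity being in a particular state from basic observed events, can be illustrated with a simple Bayesian belief update. The following is a minimal sketch with invented state names, events, and likelihood values, not taken from the dissertation itself:

```python
# Hypothetical sketch of an event-driven belief update over discrete
# vehicle-door states. State names, events, and likelihood values are
# invented for illustration.

STATES = ("door_closed", "door_open")

# P(observed event | state): how strongly each low-level event
# supports each hidden state.
EVENT_LIKELIHOOD = {
    "door_motion":    {"door_closed": 0.2, "door_open": 0.8},
    "person_exiting": {"door_closed": 0.1, "door_open": 0.9},
    "no_motion":      {"door_closed": 0.7, "door_open": 0.3},
}

def update_belief(belief, event):
    """One Bayes update: posterior is proportional to likelihood x prior."""
    posterior = {s: EVENT_LIKELIHOOD[event][s] * belief[s] for s in STATES}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = {"door_closed": 0.9, "door_open": 0.1}  # prior: door assumed closed
for ev in ("door_motion", "person_exiting"):
    belief = update_belief(belief, ev)
```

After observing motion near the door followed by a person exiting, the belief mass shifts strongly toward the open-door state, mirroring how event-based evidence drives the state tracking described above.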
At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle relationships against the pre-specified event definitions in a piecewise fashion. With special interest in recognizing a person getting into and out of a vehicle, we have tested our method on a subset of the VIRAT Aerial Video dataset and achieved superior results.
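The semantic-level check, verifying a time series of human-vehicle relationships against a pre-specified event definition, can be sketched as an in-order subsequence match. The relationship labels and the event definition below are illustrative assumptions, not the dissertation's actual predicates:

```python
# Hedged sketch of piecewise temporal verification: an event is defined
# as an ordered sequence of human-vehicle relationship labels, and we
# check whether an observed time series passes through them in order.
# The labels and the event definition are invented for illustration.

GETTING_INTO_VEHICLE = ["far", "adjacent", "inside"]

def matches_definition(series, definition):
    """True if `definition` occurs as an in-order (not necessarily
    contiguous) subsequence of the observed relationship series."""
    it = iter(series)
    return all(label in it for label in definition)

observed = ["far", "far", "adjacent", "adjacent", "inside", "inside"]
ok = matches_definition(observed, GETTING_INTO_VEHICLE)
```

Because the check only requires the stages to occur in order, it tolerates noisy or repeated observations, which matters when tracking in low-resolution aerial video is unreliable.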
|
112 |
Enabling pervasive applications by understanding individual and community behaviors
Sun, Lin 12 December 2012 (has links) (PDF)
The digital footprints collected by today's prevailing sensing systems provide novel ways to perceive an individual's behaviors. Furthermore, large collections of digital footprints from communities bring novel understandings of human behaviors from the community perspective (community behaviors), such as investigating their characteristics and learning the hidden human intelligence. The perception of human behaviors from these digital footprints enables novel applications for the sensing systems. Based on digital footprints collected with accelerometer-embedded mobile phones and GPS-equipped taxis, in this dissertation we present our work on recognizing individual behaviors, capturing community behaviors, and demonstrating the novel services they enable. With the GPS footprints of a taxi, we characterize the anomalous passenger-delivery behaviors of an individual taxi and improve the recognition efficiency of the existing iBOAT method by introducing an inverted-index mechanism. In addition, based on observations from real life, we propose a method to detect the work-shifting events of an individual taxi. With real-life, large-scale GPS traces of thousands of taxis, we investigate anomalous passenger-delivery behaviors and work-shifting behaviors from the community perspective and explore taxi serving strategies. We find that most anomalous behaviors are intentional detours, and that a high inclination to detour does not make taxis the top players. The spatial-temporal distribution of work-shifting events in the taxi community reveals their influence. While exploring taxi serving strategies, we propose a novel method to find drivers' initial intentions in passenger finding. Furthermore, we present a smart taxi system as an example to demonstrate the novel applications enabled by the perceived individual and community behaviors.
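The inverted-index mechanism mentioned above can be illustrated with a toy sketch: map each road-grid cell to the set of historical trajectories that crossed it, so the support of a new sub-trajectory becomes a fast set intersection instead of a scan over all trajectories. The cell ids and trajectories below are invented, not taken from the taxi dataset or the iBOAT implementation:

```python
# Illustrative inverted index: grid cell -> set of trajectory ids.
# Low support for a driven sub-trajectory suggests an anomalous route.

from collections import defaultdict

def build_index(trajectories):
    index = defaultdict(set)
    for tid, cells in trajectories.items():
        for cell in cells:
            index[cell].add(tid)
    return index

def support(index, sub_trajectory):
    """How many historical trajectories contain every cell of the query."""
    sets = [index.get(cell, set()) for cell in sub_trajectory]
    return len(set.intersection(*sets)) if sets else 0

history = {
    "t1": ["a", "b", "c", "d"],
    "t2": ["a", "b", "c", "d"],
    "t3": ["a", "x", "y", "d"],   # a detour through x, y
}
idx = build_index(history)
```

Here `support(idx, ["b", "c"])` is 2 (the common route), while the detour cells `["x", "y"]` are supported by only one trajectory, the kind of low-support signal an anomaly detector can threshold on.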
|
113 |
Reconnaissance perceptuelle des objets d’Intérêt : application à l’interprétation des activités instrumentales de la vie quotidienne pour les études de démence / Perceptual object of interest recognition : application to the interpretation of instrumental activities of daily living for dementia studies
Buso, Vincent 30 November 2015 (links)
The rationale and motivation of this PhD thesis is the diagnosis, assessment, maintenance and promotion of self-independence of people with dementia in their Instrumental Activities of Daily Living (IADLs). In this context, a strong focus is held towards the task of automatically recognizing IADLs. Egocentric video analysis (cameras worn by a person) has recently gained much interest regarding this goal.
Indeed, recent studies have demonstrated how crucial the recognition of active objects (manipulated or observed by the person wearing the camera) is for the activity recognition task, and egocentric videos present the advantage of holding a strong differentiation between active and passive objects (associated with the background). One recent approach towards finding active elements in a scene is the incorporation of visual saliency into object recognition paradigms. Modeling the selective process of human perception of visual scenes is an efficient way to drive scene analysis towards particular areas considered of interest, or salient, which in egocentric videos strongly correspond to the locations of objects of interest. The objective of this thesis is to design an object recognition system that relies on visual saliency maps to provide more precise object representations that are robust against background clutter and therefore improve the recognition of active objects for the IADLs recognition task. This PhD thesis is conducted in the framework of the Dem@care European project.

Regarding the vast field of visual saliency modeling, we investigate and propose contributions in both the bottom-up (gaze driven by stimuli) and top-down (gaze driven by semantics) areas, aimed at enhancing the particular task of active object recognition in egocentric video content. Our first contribution on bottom-up models originates from the fact that observers are attracted by a central stimulus (the center of an image). This biological phenomenon is known as central bias. In egocentric videos, however, this hypothesis does not always hold. We study saliency models with non-central-bias geometrical cues. The proposed visual saliency models are trained on the eye fixations of observers and incorporated into spatio-temporal saliency models. When compared to state-of-the-art visual saliency models, the ones we present show promising results, as they highlight the necessity of a non-centered geometric saliency cue. For our top-down contribution we present a probabilistic visual attention model for manipulated object recognition in egocentric video content. Although arms often occlude objects and are usually considered a burden for many vision systems, they become an asset in our approach, as we extract both global and local features describing their geometric layout and pose, as well as the objects being manipulated. We integrate this information into a probabilistic generative model, provide update equations that automatically compute the model parameters optimizing the likelihood of the data, and design a method to generate maps of visual attention that are later used in an object-recognition framework. This task-driven assessment reveals that the proposed method outperforms the state of the art in object recognition for egocentric video content. [...]
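The contrast between a central-bias prior and the non-centered geometric cue discussed above can be sketched with a 2-D Gaussian whose mean is allowed to move off-center. All parameter values here are invented for illustration:

```python
# Toy sketch of a geometric saliency prior: an isotropic 2-D Gaussian,
# either centered (classical central bias) or shifted (non-central bias).
# Map size, means, and sigma are illustrative assumptions.

import math

def gaussian_prior(width, height, mu_x, mu_y, sigma):
    """Return a height x width map of an isotropic Gaussian prior."""
    return [
        [math.exp(-((x - mu_x) ** 2 + (y - mu_y) ** 2) / (2 * sigma ** 2))
         for x in range(width)]
        for y in range(height)
    ]

# Central bias: peak at the image center.
central = gaussian_prior(9, 9, mu_x=4, mu_y=4, sigma=2.0)
# Non-central bias, e.g. shifted toward the lower part of the frame,
# where manipulated objects tend to appear in egocentric video.
shifted = gaussian_prior(9, 9, mu_x=4, mu_y=7, sigma=2.0)
```

In a full model, such a geometric prior would be one multiplicative cue among the spatio-temporal ones learned from recorded eye fixations.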
|
114 |
A Simulator Tool for Human Activity Recognition
Westholm, Erik January 2010 (links)
The goal of this project was to create a simulator producing data for research in the field of activity recognition. The simulator simulates a human entity moving around in, and interacting with, a PEIS environment. It ended up being based on The Sims 3, and the thesis describes how this was done. The reader is expected to have some experience with programming.
|
115 |
Context Aware Reminder System : Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems
Ahmed, Qutub Uddin, Mujib, Saifullah Bin January 2014 (links)
Context. A reminder system offers flexibility in daily life activities and helps users remain independent. A reminder system not only helps with reminders for daily life activities, but also serves, to a great extent, people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems, such as people with disabilities or mild dementia. Traditional reminders, which are based on a set of predefined activities, are not enough to address this necessity in a wider context. To make the reminder more flexible, the user's current activities or context need to be considered. To recognize the user's current activity, different types of sensors can be used. These sensors are available in smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context-based, it is important to identify the context, and the user's activities need to be recognized at a particular moment. Keeping this notion in mind, this research aims to understand the relevant context and activities, identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using smartphone sensors (accelerometer and gyroscope), and propose a model that uses the properties of the recognized activities. Methods. This research combined a survey and interviews with an exploratory smartphone sensor experiment to recognize user activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with Karlskrona Municipality. Four elderly people participated in the interviews. For the experiment, data for three different user activities were collected using smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results.
The survey and interviews helped in understanding the important activities of daily living that can be considered when designing the reminder system, and how and when it should be used. For instance, most of the survey participants already use some sort of reminder system, most of them use a smartphone, and one of the most important tasks they forget is taking their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three different activities. For walking and jogging, the pattern is discrete. For the drinking activity, on the other hand, the pattern is complex and can sometimes overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided a set of evidence that a reminder system based on users' activity is essential in daily life. The large number of smartphone users led this research to select smartphone sensors to identify users' activity, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to recorded smartphone sensor (accelerometer and gyroscope) data. The approach achieved 99% accuracy on the experimental data. The study concluded by proposing a model that uses the properties of the recognized activities and by developing a prototype of a reminder system. This study performed preliminary tests on the model, but further empirical validation and verification of the model are needed.
|
116 |
Unsupervised Spatio-Temporal Activity Learning and Recognition in a Stream Processing Framework / Oövervakad maskininlärning och klassificering av spatio-temporala aktiviteter i ett ström-baserat ramverk
Tiger, Mattias January 2014 (links)
Learning to recognize and predict common activities, performed by objects and observed by sensors, is an important and challenging problem related both to artificial intelligence and to robotics. In this thesis, the general problem of dynamic adaptive situation awareness is considered, and we argue for the need for an on-line, bottom-up approach. A candidate for a bottom layer is proposed, which we consider capable of future extensions that can bring us closer to the goal. We present a novel approach to adaptive activity learning, where a mapping between raw data and primitive activity concepts is learned and continuously improved on-line and unsupervised. The approach takes streams of observations of objects as input and learns a probabilistic representation of both the observed spatio-temporal activities and their causal relations. The dynamics of the activities are modeled using sparse Gaussian processes, and their causal relations using probabilistic graphs. The learned model supports both estimating the most likely current activity and predicting the most likely future (and past) activities. Methods and ideas from a wide range of previous work are combined to provide a uniform and efficient way to handle a variety of common problems related to learning, classifying and predicting activities. The framework is evaluated both by learning activities in a simulated traffic monitoring application and by learning the flight patterns of an internally developed autonomous quadcopter system. The conclusion is that our framework is capable of learning the observed activities in real time with good accuracy. We see this work as a step towards unsupervised learning of activities that lets robotic systems adapt to new circumstances autonomously and learn new activities on the fly, which can then be detected and predicted immediately.
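Gaussian-process regression, the building block used above to model activity dynamics, can be sketched in miniature: two training points, an RBF kernel, and a hand-coded 2x2 solve keep it dependency-free. Hyperparameters and data are arbitrary illustrative choices, and real (sparse) GP models are far richer than this:

```python
# Toy GP regression: posterior mean at a query point from two noisy
# observations, using the closed-form inverse of the 2x2 kernel matrix.

import math

def rbf(a, b, length=1.0):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def gp_posterior_mean(train_x, train_y, query, noise=1e-6):
    # K + noise * I for the two training inputs
    k11 = rbf(train_x[0], train_x[0]) + noise
    k22 = rbf(train_x[1], train_x[1]) + noise
    k12 = rbf(train_x[0], train_x[1])
    det = k11 * k22 - k12 * k12
    # alpha = K^{-1} y via the closed-form 2x2 inverse
    a0 = (k22 * train_y[0] - k12 * train_y[1]) / det
    a1 = (-k12 * train_y[0] + k11 * train_y[1]) / det
    # mean(x*) = k(x*, X) . alpha
    return rbf(query, train_x[0]) * a0 + rbf(query, train_x[1]) * a1

mean_at_0 = gp_posterior_mean([0.0, 2.0], [1.0, -1.0], query=0.0)
```

With near-zero noise the posterior mean interpolates the training data and smoothly blends between them elsewhere, which is what makes GPs attractive for modeling continuous activity trajectories from streamed observations.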
|
117 |
IntelliChair : a non-intrusive sitting posture and sitting activity recognition system
Fu, Teng January 2015 (links)
Current ambient intelligence and intelligent environment research focuses on interpreting a subject’s behaviour at the activity level by logging Activities of Daily Living (ADLs) such as eating, cooking, etc. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensors, which makes it possible to improve activity analysis. Building on previous research on sitting posture detection, this research attempts to analyse human sitting activity further. The aim of this research is to use a non-intrusive, low-cost chair system with embedded pressure sensors to recognize a subject’s activity from their detected postures. The research comprises three steps: the first is to find a hardware solution for low-cost sitting posture detection, the second is to find a suitable strategy for sitting posture detection, and the last is to correlate time-ordered sitting posture sequences with sitting activity. The author built a prototype sensing system called IntelliChair for sitting posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system. The prototype work looks at sensor selection and the integration of various sensors, and indicates the best choice for a low-cost, non-intrusive system. Subsequently, this research applies signal processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting posture and sitting activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair, and it is used for the posture recognition experiment.
The latter dataset was collected by asking the subjects to perform their normal sitting activity routine on IntelliChair for four hours, and it is used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers are trained (one for spine postures and the other for leg postures) and their performance evaluated. A Hidden Markov Model is used for sitting activity modelling and recognition, in order to infer the selected sitting activities from sitting posture sequences. After experimenting with possible sensors, the Force Sensing Resistor (FSR) was selected as the pressure sensing unit for IntelliChair. Eight FSRs are mounted on the seat and back of a chair to gather haptic (i.e., touch-based) posture information. Furthermore, the research explores the possibility of using alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and finds that the Kinect sensor is not reliable for sitting posture detection due to the joint drifting problem. Based on the experiment results, a suitable sampling rate for IntelliChair was determined to be 6 Hz. The posture classification performance shows that the SVM-based classifier is robust to “familiar” subject data (accuracy is 99.8% for spine postures and 99.9% for leg postures). When dealing with “unfamiliar” subject data, the accuracy is 80.7% for spine posture classification and 42.3% for leg posture classification. Activity recognition achieves 41.27% accuracy among four selected activities (relaxing, playing a game, working with a PC and watching video). The results of this thesis show that individual body characteristics and sitting habits influence both sitting posture and sitting activity recognition. This suggests that IntelliChair is suitable for individual usage, but a training stage is required.
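The posture-to-activity idea, hidden activity states emitting observable postures, can be sketched with a tiny hidden Markov model evaluated with the forward algorithm. All states, observations, and probabilities below are invented for illustration and stand in for the thesis's full HMM pipeline:

```python
# Toy HMM: hidden states are activities, observations are detected
# postures. Probabilities are invented; forward algorithm gives the
# most likely activity at the end of a posture sequence.

ACTIVITIES = ("working", "relaxing")
START = {"working": 0.5, "relaxing": 0.5}
TRANS = {"working": {"working": 0.8, "relaxing": 0.2},
         "relaxing": {"working": 0.2, "relaxing": 0.8}}
EMIT = {"working": {"upright": 0.7, "leaning_back": 0.3},
        "relaxing": {"upright": 0.2, "leaning_back": 0.8}}

def most_likely_activity(postures):
    """Forward algorithm: P(final activity, posture sequence)."""
    prob = {a: START[a] * EMIT[a][postures[0]] for a in ACTIVITIES}
    for obs in postures[1:]:
        prob = {a: sum(prob[p] * TRANS[p][a] for p in ACTIVITIES) * EMIT[a][obs]
                for a in ACTIVITIES}
    return max(prob, key=prob.get)
```

The sticky transition probabilities smooth over single misclassified postures, which is exactly why an HMM layer helps when, as reported above, per-posture classification of unfamiliar subjects is noisy.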
|
118 |
Analyse et reconnaissance de séquences vidéos d'activités humaines dans l'espace sémantique / Analysis and recognition of human activities in video sequences in the semantic space
Beaudry, Cyrille 26 November 2015 (links)
This thesis focuses on the characterization and recognition of human activities in videos. This research domain is motivated by a large set of applications such as automatic video indexing, video monitoring or elderly assistance.
In the first part of our work, we develop an approach based on optical flow estimation in video to recognize elementary human actions. From the obtained vector field, we extract critical points and trajectories estimated at different spatio-temporal scales. The late fusion of local characteristics, such as motion orientation and shape around critical points, combined with the frequency description of trajectories, allows us to obtain one of the best recognition rates among state-of-the-art methods. In the second part, we develop a method for recognizing complex human activities by considering them as temporal sequences of elementary actions. As a first step, elementary action probabilities over time are calculated in a video sequence with our first approach. The vectors of action probabilities lie on a statistical manifold called the semantic simplex. Activities are then represented as trajectories on this manifold. Finally, a new descriptor is introduced to discriminate between activities based on the shape of their associated trajectories. This descriptor takes into account the geometry induced by the simplex manifold.
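The second stage can be illustrated in miniature: a sequence of per-frame action probabilities is a trajectory on the probability simplex, and the magnitudes of its discrete Fourier coefficients give a compact descriptor of the trajectory's shape. The data and this plain DFT are illustrative assumptions; the thesis's descriptor additionally accounts for the simplex geometry:

```python
# Toy frequency descriptor of a simplex trajectory: DFT magnitudes of
# one probability coordinate over time. Values are invented.

import cmath

def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# P(action = "walk") over 8 frames; the remaining probability mass
# belongs to the other vertices of the simplex.
walk_prob = [0.1, 0.5, 0.9, 0.5, 0.1, 0.5, 0.9, 0.5]
descriptor = dft_magnitudes(walk_prob)
```

A periodic activity concentrates energy in a few coefficients (here the mean at k = 0 and the period-4 oscillation at k = 2), so trajectories with different shapes yield distinguishable descriptors.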
|
119 |
Learning discriminative models from structured multi-sensor data for human context recognition
Suutala, J. (Jaakko) 17 June 2012 (links)
Abstract
In this work, statistical machine learning and pattern recognition methods were developed and applied to sensor-based human context recognition. More precisely, we concentrated on an effective discriminative learning framework, where the input-output mapping is learned directly from a labeled dataset. Non-parametric discriminative classification and regression models based on kernel methods were applied. They include support vector machines (SVM) and Gaussian processes (GP), which play a central role in modern statistical machine learning. Based on these established models, we propose various extensions for handling the structured data that usually arise in real-life applications, for example in the field of context-aware computing.
We applied both SVM and GP techniques to handle data with multiple classes in a structured multi-sensor domain. Moreover, a framework for combining data from several sources in this setting was developed using multiple classifiers and fusion rules, with kernel methods as base classifiers. We developed two novel methods for handling sequential input and output data. For sequential time-series data, a novel kernel based on a graph representation, called the weighted walk-based graph kernel (WWGK), is introduced. For sequential output labels, discriminative temporal smoothing (DTS) is proposed. Again, the proposed algorithms are modular, so different kernel classifiers can be used as base models. Finally, we propose a group of techniques based on Gaussian process regression (GPR) and particle filtering (PF) to learn to track multiple targets.
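The particle-filtering component of the tracking techniques can be sketched in one dimension: particles carry position hypotheses, are propagated with process noise, reweighted by a Gaussian measurement likelihood, and resampled. The motion and measurement models and all constants are invented for illustration; the thesis combines PF with learned GPR models rather than this hand-set model:

```python
# Minimal 1-D particle filter step: predict, update, resample.
# A fixed seed keeps the sketch deterministic and testable.

import math
import random

def step(particles, measurement, motion_noise=0.4, meas_sigma=0.5):
    rng = random.Random(0)
    # 1. Predict: propagate each particle with process noise.
    moved = [p + rng.gauss(0.0, motion_noise) for p in particles]
    # 2. Update: weight each particle by the measurement likelihood.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_sigma ** 2))
               for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: draw particles proportionally to their weights.
    return rng.choices(moved, weights=weights, k=len(moved))

particles = [0.0] * 200                 # initial belief: target at 0
for z in (1.0, 1.0, 1.0):               # repeated measurements near 1
    particles = step(particles, z)
estimate = sum(particles) / len(particles)
```

After a few measurements, the particle cloud drifts from the prior toward the measured position; in the multi-target setting described above, one such filter (with a learned dynamics model) would be maintained per target.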
We applied the proposed methodology to three different human-motion-based context recognition applications: person identification, person tracking, and activity recognition, where floor sensors (pressure-sensitive and binary switch) and wearable acceleration sensors are used to measure human motion and gait during walking and other activities. Furthermore, we extracted a useful set of specific high-level features from raw sensor measurements in the time, frequency, and spatial domains for each application. As a result, we developed practical extensions to kernel-based discriminative learning for many kinds of structured data applied to human context recognition.
|
120 |
Raisonnement distribué dans un environnement ambiant / Distributed reasoning in an ambient environment Jarraya, Amina 16 July 2019 (has links)
Pervasive Computing and Ambient Intelligence aim to create a smart environment of networked electronic and computing devices, such as sensors, that integrate seamlessly into everyday life and provide users with transparent access to services anywhere and at any time. To ensure this, a system needs global knowledge of its environment, in particular of people and devices, their interests and capabilities, and the associated tasks and activities. All this information falls under the concept of context. This involves gathering the user's contextual data to determine his or her current situation or activity; we speak of situation/activity identification. The system must therefore be sensitive to changes in its environment and context in order to detect situations/activities and then adapt dynamically. Recognizing a situation or activity requires a complete process: perception of contextual data, analysis of the collected data, and reasoning on them to identify situations/activities.
We are particularly interested in aspects related to the distributed modeling of the ambient environment and to distributed reasoning in the presence of imperfect data for the identification of situations/activities. The first contribution of the thesis concerns the perception part: we propose a new perception model that gathers raw data from the sensors deployed in the environment and generates events. The second contribution focuses on the observation and analysis of these events, segmenting them and extracting the most significant and relevant features. Finally, the last two contributions present two proposals for distributed reasoning for the identification of situations/activities; one is the main contribution and the other is an improved version that overcomes some of its limitations. From a technical point of view, all these proposals have been developed, validated, and evaluated with several tools.
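The perception model's event generation from raw sensor data is not detailed in the abstract; one common, assumed scheme is change detection, where a reading becomes an event only when a sensor's value differs from its last reported value by more than a threshold (sensor names, values, and the threshold here are purely illustrative):

```python
def generate_events(readings, threshold):
    """Turn a stream of (sensor_id, value) readings into discrete
    events whenever a sensor's value changes by more than
    `threshold` since the last emitted event for that sensor."""
    last = {}
    events = []
    for sensor_id, value in readings:
        prev = last.get(sensor_id)
        if prev is None or abs(value - prev) > threshold:
            events.append((sensor_id, value))
            last[sensor_id] = value
    return events

# Small jitter on the door sensor is suppressed; the real state
# change and the first temperature reading become events.
readings = [("door", 0.0), ("door", 0.05), ("door", 1.0), ("temp", 21.0)]
print(generate_events(readings, threshold=0.5))
# [('door', 0.0), ('door', 1.0), ('temp', 21.0)]
```

Downstream, such events would feed the segmentation and feature-extraction stage described above before any distributed reasoning takes place.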
|