  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Recognition of human interactions with vehicles using 3-D models and dynamic context

Lee, Jong Taek, 1983- 11 July 2012 (has links)
This dissertation describes two distinctive methods for human-vehicle interaction recognition: one for ground-level videos and the other for aerial videos. For ground-level videos, this dissertation presents a novel methodology that estimates a detailed status of a scene involving multiple humans and vehicles. The system tracks their configuration even when they are performing complex interactions with severe occlusion, such as four persons exiting a car together. The motivation is to identify the 3-D states of vehicles (e.g. the status of doors) and their relations with persons, which is necessary to analyze complex human-vehicle interactions (e.g. breaking into or stealing a vehicle), and to use the motion of humans and car doors to detect atomic human-vehicle interactions. A probabilistic algorithm has been designed to track humans and analyze their dynamic relationships with vehicles using dynamic context. We have focused on two ideas. One is that many simple events can be detected by low-level analysis, and these detected events must be contextually consistent with the human/vehicle status tracking results. The other is that motion cues carry information about states in the current and future frames, so analyzing motion is critical to detecting such simple events. Our approach updates the probability of a person (or a vehicle) having a particular state based on these basic observed events, and probabilistic inference lets the tracking process match event-based evidence with motion-based evidence. For aerial videos, the object resolution is low, the visual cues are vague, and the detection and tracking of objects is consequently less reliable; any method that requires accurate object tracking or exact matching of event definitions is better avoided. To address these issues, we present a temporal-logic-based approach that does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle relationships against the pre-specified event definitions in a piecewise fashion. With special interest in recognizing a person getting into and out of a vehicle, we have tested our method on a subset of the VIRAT Aerial Video dataset and achieved superior results.
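The event-driven probabilistic state update described in the abstract can be sketched as a simple Bayesian belief update over a vehicle's door state. The event names and likelihood values below are illustrative assumptions, not taken from the dissertation.

```python
# Hypothetical sketch: updating P(door = open) from observed atomic events.
# The events and their likelihoods are assumed for illustration only.

def update_state(prior_open, event, likelihoods):
    """One Bayes update of P(door = open) given an observed atomic event."""
    p_e_open, p_e_closed = likelihoods[event]
    num = p_e_open * prior_open
    den = num + p_e_closed * (1.0 - prior_open)
    return num / den

LIKELIHOODS = {
    # (P(event | door open), P(event | door closed)) -- assumed values
    "person_exits": (0.80, 0.05),
    "door_motion":  (0.60, 0.30),
    "no_motion":    (0.20, 0.70),
}

p = 0.5  # uninformative prior
for ev in ["door_motion", "person_exits"]:
    p = update_state(p, ev, LIKELIHOODS)
# after both events, the belief that the door is open is high (~0.97)
```

The same update rule extends naturally to per-frame motion evidence by treating each motion observation as another event with its own likelihoods.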
82

Human Activity Monitoring with Multiple Abductive Hypotheses

Vettier, Benoît 24 September 2013 (has links)
This thesis deals with human activity monitoring through the real-time analysis of physiological and accelerometry data. These data come from ambulatory sensors; they are noisy and ambiguous, and represent only a partial, incomplete view of the current situation. Given the nature of the data on one hand, and the application's required features on the other, we consider an open world of non-exclusive possible situations, which constrains the reasoning engine. We therefore propose abductive reasoning based on interconnected and personalized models. This way of reasoning consists in handling a beam of hypotheses within a dynamic Frame of constraints, which come both from the Observer (who defines acceptable situations) and from non-functional requirements or from the observed person's health. The number of hypotheses at each time step varies by means of Prediction-Verification schemes, and the evolution of the Frame leads to context-sensitive adaptive control. We propose a multi-agent system to manage these hypotheses; the agents are organized around a shared environment which allows them to exchange information. These exchanges, and more generally the detection of the agents' activation contexts, are governed by condition-action filters. The way of reasoning and the organization of heterogeneous agents within a homogeneous Frame make the system expressive, extensible, and computationally efficient. An implementation using real sensor data is presented to illustrate these qualities.
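The Prediction-Verification loop over a beam of hypotheses can be sketched as follows. The hypothesis structure, the sensor ranges, and the "acceptable" set in the Frame are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch of a predict-verify step over a hypothesis beam,
# filtered by a Frame of acceptable situations (condition-action style).

def predict(hypothesis):
    # Each hypothesis predicts an expected observation range (assumed form).
    return hypothesis["expected_range"]

def verify(hypothesis, observation):
    lo, hi = predict(hypothesis)
    return lo <= observation <= hi

def step(beam, observation, frame):
    """Keep hypotheses consistent with both the observation and the Frame."""
    return [h for h in beam
            if verify(h, observation) and h["name"] in frame["acceptable"]]

beam = [
    {"name": "resting", "expected_range": (60, 80)},    # heart rate, bpm
    {"name": "walking", "expected_range": (80, 110)},
    {"name": "running", "expected_range": (110, 180)},
]
frame = {"acceptable": {"resting", "walking", "running"}}
beam = step(beam, observation=95, frame=frame)  # only "walking" survives
```

In the thesis's setting the beam also grows, since new hypotheses can be spawned when observations fit none of the current ones; the sketch shows only the pruning half of that loop.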
83

Analysis and Recognition of Human Activities in Video Sequences in the Semantic Space

Beaudry, Cyrille 26 November 2015 (has links)
This thesis focuses on the characterization and recognition of human activities in videos. Growing interest in this topic within computer vision is motivated by a wide variety of applications such as automatic video indexing, video surveillance, and elderly assistance. In the first part of our work, we develop an approach based on optical flow estimation to recognize elementary human actions. From the resulting vector field, we extract critical points and their trajectories, estimated at different spatio-temporal scales. The late fusion of local characteristics (motion orientation and gradient variation) in the neighborhood of the critical points, combined with a frequency-domain description of the trajectories, yields recognition rates among the best in the literature. In the second part, we build a method for recognizing complex activities by treating them as temporal sequences of elementary actions. Our action recognition method computes the probabilities of elementary actions performed over time; these probability sequences evolve on a statistical manifold called the semantic simplex, so an activity is represented as a trajectory in this space. We introduce a frequency-domain trajectory descriptor, which takes into account the geometry induced by the semantic simplex, to classify human activities by the shape of their associated trajectories.
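The simplex-trajectory idea can be sketched in a few lines: a sequence of action probability vectors is embedded via the square-root map (a common choice for the Fisher geometry of the simplex, not necessarily the thesis's exact construction) and described by its low-frequency content.

```python
# Hedged sketch: activity = trajectory of action-probability vectors on the
# simplex; descriptor = low-frequency FFT coefficients of the embedded path.
import numpy as np

def to_sphere(prob_seq):
    """Square-root embedding: maps simplex points onto the unit sphere."""
    return np.sqrt(np.asarray(prob_seq, dtype=float))

def trajectory_descriptor(prob_seq, n_coeffs=4):
    traj = to_sphere(prob_seq)                 # shape (T, K)
    spec = np.abs(np.fft.rfft(traj, axis=0))   # frequency content per action
    return spec[:n_coeffs].ravel()             # keep low frequencies only

# Toy sequence: 3-action probability vectors over 8 frames
seq = [[0.8, 0.1, 0.1]] * 4 + [[0.1, 0.8, 0.1]] * 4
d = trajectory_descriptor(seq)                 # 4 coefficients x 3 actions
```

Keeping only the first few coefficients makes the descriptor fixed-length regardless of the activity's duration, which is what allows a standard classifier to compare trajectories of different lengths.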
84

Intelligent Recognition of the Smartphone User's Activity

Pustka, Michal January 2018 (has links)
This thesis deals with real-time human activity recognition (e.g., running, walking, driving) using sensors available on current mobile devices. The final product consists of several parts: an application for collecting sensor data from mobile devices; a tool for preprocessing the collected data and building a dataset; and, as the main part, the design of a convolutional neural network for activity classification and its subsequent use in an Android mobile application. Together these parts form a comprehensive framework for detecting user activities. Finally, several experiments were performed and evaluated (e.g., the influence of specific sensors on detection precision).
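Pipelines like this one typically feed the network fixed-size, overlapping windows of raw sensor samples. A minimal sketch of that preprocessing step follows; the window length and stride are illustrative choices, not values from the thesis.

```python
# Minimal sketch: slicing a raw triaxial accelerometer stream into
# overlapping fixed-size windows suitable for a convolutional classifier.
import numpy as np

def make_windows(samples, length=128, stride=64):
    """samples: (N, 3) array of (x, y, z) readings -> (W, length, 3)."""
    windows = [samples[i:i + length]
               for i in range(0, len(samples) - length + 1, stride)]
    return np.stack(windows)

data = np.random.randn(512, 3)   # stand-in for a recorded sensor stream
w = make_windows(data)           # 7 windows of 128 samples x 3 axes
```

Overlapping windows (stride smaller than length) both enlarge the training set and reduce the chance that an activity boundary falls awkwardly between two windows.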
85

Human Activity Recognition Using a Smartphone

Novák, Andrej January 2016 (has links)
The number of mobile smartphones continues to grow, and with it the demand for automating and exploiting what these phones offer, whether in medicine (health care and monitoring) or in user applications (automatic position recognition, etc.). As part of this work, a system was designed and implemented for recognizing human activity from smartphone sensor data, along with a determination of the optimal parameters, the recognition success rates, and a comparison of the individual evaluations. A further contribution is a proposed format for, and visualization of, a large training set consisting of real recordings and their manual annotations. In addition, a software tool was created for validating elements of the training set and extracting features from it, together with software that uses deep learning to train models and then test them.
86

Incubation Behaviour of the Arctic Tern Sterna paradisaea in the Extreme Conditions of the Northern Tundra

Hromádková, Tereza January 2015 (has links)
A short breeding period and harsh climatic conditions are major limiting factors to which birds must adapt in northern tundra regions. Despite this, dozens of bird species migrate to these regions annually to increase their chances of breeding successfully. My diploma thesis focuses on the incubation behaviour of the Arctic tern (Sterna paradisaea). The research took place on the Norwegian archipelago of Svalbard, at two locations: Adolfbukta and Longyearbyen. Using continuous video recording, I described the incubation behaviour of this species in detail. Human activity differs greatly between Adolfbukta and Longyearbyen, and at Adolfbukta the study was conducted during the 2012 and 2014 seasons, which experienced different predation pressure. This allowed me to evaluate the impact of human activity, as well as of differing predation pressure, on the incubation behaviour and breeding ecology of the Arctic tern. The presence of humans close to the colony had a significant effect on incubation behaviour: owing to greater disturbance, incubating birds tended to leave their nests more often, paid less attention to the nest, and spent half as long in calm incubation (sleeping on the nest). Human activity had no effect on other measures such as average clutch size or...
87

E-Shape Analysis

Sroufe, Paul 12 1900 (has links)
The motivation of this work is to understand E-shape analysis and how it can be applied to various classification tasks. Its powerful feature is that it considers not only what information is contained but how that information looks, which makes E-shape analysis language independent and, to some extent, size independent. In this thesis, I present a new mechanism, E-shape analysis for email, that characterizes an email without using content or context. I explore applications of email shape through a case study of botnet detection and two further possible applications: spam filtering and social-context-based fingerprinting. The second part of this thesis applies E-shape analysis to human activity recognition: using the Android platform and a T-Mobile G1 phone, I collect data from the triaxial accelerometer and use it to classify the motion behavior of a subject.
88

Deep Learning Approach for Extracting Heart Rate Variability from a Photoplethysmographic Signal

Odinsdottir, Gudny Björk, Larsson, Jesper January 2020 (has links)
Photoplethysmography (PPG) is a method for detecting blood volume changes at every heartbeat. The peaks in the PPG signal correspond to the electrical impulses sent by the heart. The duration between heartbeats varies, and these variations are known as heart rate variability (HRV); correctly finding peaks in PPG signals therefore makes an accurate HRV measurement possible. Prior research indicates that deep learning approaches can extract HRV from a PPG signal with significantly greater accuracy than traditional methods. In this study, deep learning classifiers were built to detect peaks in a noise-contaminated PPG signal and to recognize the activity performed during the recording. The dataset, provided by the PhysioBank database, consists of synchronized PPG, acceleration, and gyroscope data. The models investigated were limited to a one-layer LSTM network with six different neuron counts and four different window sizes. The most accurate model for peak classification consisted of 256 neurons and a window size of 15 time steps, with a Matthews correlation coefficient (MCC) of 0.74. The model consisting of 64 neurons and a window duration of 1.25 seconds gave the most accurate activity classification, with an MCC of 0.63. In conclusion, further optimization of a deep learning approach could yield promising accuracy for peak detection and thus an accurate measurement of HRV. The probable cause of the low accuracy on the activity classification problem is the limited data used in this study.
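The MCC scores reported above are computed from the confusion matrix of a binary classifier. A short sketch of the standard formula follows; the confusion-matrix counts here are made up for illustration, not taken from the study.

```python
# Matthews correlation coefficient from binary confusion-matrix counts.
# MCC ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
import math

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

score = mcc(tp=80, tn=85, fp=15, fn=20)  # illustrative counts, ~0.65
```

Unlike accuracy, MCC stays informative on imbalanced data (such as peak-versus-non-peak samples, where non-peaks vastly outnumber peaks), which is presumably why the study reports it.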
89

Eye Movement Analysis for Activity Recognition in Everyday Situations

Gustafsson, Anton January 2018 (has links)
The increasing number of smart devices in our everyday environment has created new problems within human-computer interaction, such as how we humans are supposed to interact with these devices efficiently and with ease. Context-aware systems are a possible candidate for solving this problem: if a system could automatically detect people's activities and intentions, it could act accordingly without any explicit input from the user. Eyes have previously been shown to be a rich source of information about a person's cognitive state and current activity, which makes them a viable input modality for extracting activity information. In this thesis, we examine the possibility of detecting human activity using a low-cost, home-built monocular eye tracker. An experiment was conducted in which participants performed everyday activities in a kitchen while their eye movement data was collected. After the experiment, the data was annotated, preprocessed, and classified using multilayer perceptron and random forest classifiers. Even though the collected dataset was small, the results showed a recognition rate between 30% and 40% depending on the classifier used. This confirms previous work showing that activity recognition from eye movement data is possible, but also that achieving high accuracy remains challenging.
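Classifiers like these are usually fed features derived from the raw gaze stream rather than raw coordinates. One common eye-movement feature is the number of fixations, which can be detected with a simple dispersion threshold; the sketch below is illustrative, not the thesis's feature set, and the window size and threshold are assumed values.

```python
# Illustrative sketch: counting fixations in a gaze stream with a
# dispersion-threshold rule (low spatial spread over a short window).
import numpy as np

def count_fixations(gaze, window=5, max_dispersion=10.0):
    """gaze: (N, 2) array of (x, y) coordinates; returns fixation count."""
    count, i = 0, 0
    while i + window <= len(gaze):
        chunk = gaze[i:i + window]
        dispersion = np.ptp(chunk[:, 0]) + np.ptp(chunk[:, 1])
        if dispersion <= max_dispersion:
            count += 1
            i += window          # skip past the detected fixation
        else:
            i += 1
    return count

steady = np.ones((10, 2)) * 100                     # still gaze
moving = np.cumsum(np.ones((10, 2)) * 5, axis=0)    # sweeping gaze
```

Here the steady gaze yields two fixations (two consecutive still windows) while the sweeping gaze yields none; features like this, aggregated per activity segment, are what the classifiers would consume.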
90

Automatic Feature Extraction for Human Activity Recognition on the Edge

Cleve, Oscar, Gustafsson, Sara January 2019 (has links)
This thesis evaluates two methods of automatic feature extraction for classifying accelerometer data from periodic and sporadic human activities. The first method selects features using individual hypothesis tests, combined in this study with a correlation filter; the second uses a random forest classifier as an embedded feature selector. Both methods start from the same initial pool of automatically generated time series features, and a decision tree classifier performs the human activity recognition task in both cases. The possibility of running the resulting model on a processor with limited computing power was taken into consideration when selecting the methods to evaluate. The classification results showed that the random forest method was good at prioritizing among features: with 23 features selected it achieved a macro-average F1 score of 0.84 and a weighted-average F1 score of 0.93, whereas the first method achieved a macro-average F1 score of only 0.40 and a weighted-average F1 score of 0.63 with the same number of features. In addition to classification performance, this thesis studies the potential business benefits of automating feature extraction.
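The first method's filter approach (univariate scoring followed by a redundancy filter) can be sketched as below. This uses a simple correlation score in place of the study's statistical hypothesis tests, and the threshold and toy data are illustrative assumptions.

```python
# Toy sketch of filter-style feature selection: rank features by a
# univariate score, then greedily skip features too correlated with
# already-selected ones (the study's scoring used hypothesis tests).
import numpy as np

def select_features(X, y, k=3, corr_limit=0.9):
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    chosen = []
    for j in np.argsort(scores)[::-1]:       # best-scoring features first
        if all(abs(np.corrcoef(X[:, j], X[:, c])[0, 1]) < corr_limit
               for c in chosen):
            chosen.append(int(j))
        if len(chosen) == k:
            break
    return chosen

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
informative = y + rng.normal(0, 0.1, 200)
duplicate = informative + rng.normal(0, 0.01, 200)  # redundant copy
noise = rng.normal(size=200)
X = np.column_stack([noise, informative, duplicate])
picked = select_features(X, y, k=2)  # keeps one informative copy + noise
```

An embedded selector such as a random forest instead ranks features by how much they reduce impurity inside the model itself, which is what gave the stronger results reported above.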
