51

Human Activity Recognition and Step Counter Using Smartphone Sensor Data

Jansson, Fredrik, Sidén, Gustaf January 2022 (has links)
Human Activity Recognition (HAR) is a growing field of research concerned with classifying human activities from sensor data. Modern smartphones contain numerous sensors that can be used to identify the physical activities of the smartphone wearer, with applications in sectors such as healthcare, eldercare, and fitness. This project uses smartphone sensor data together with machine learning to perform HAR on the following locomotion activities: standing, walking, running, ascending stairs, descending stairs, and biking. Classification was done with a random forest classifier. Furthermore, for the special case of walking, an algorithm was developed that counts the number of steps in a given data sequence. The step-counting algorithm was not based on a previous implementation and can therefore be considered novel. The step counter achieved a testing accuracy of 99.1% and the HAR classifier a testing accuracy of 100%. These abnormally high accuracies are attributed primarily to the lack of data diversity, since in both cases the data was collected by only two people. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
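The step-counting idea described above — detecting strides as peaks in the accelerometer signal — can be sketched in a few lines. This is an illustrative sketch, not the thesis's algorithm: the magnitude threshold and the minimum gap between peaks are assumed values, and real data would need filtering first.

```python
def count_steps(magnitude, threshold=11.0, min_gap=20):
    """Count steps as local maxima of the accelerometer magnitude signal
    that exceed `threshold` (m/s^2) and are at least `min_gap` samples
    apart. Both parameters are illustrative assumptions."""
    steps, last_peak = 0, -min_gap
    for i in range(1, len(magnitude) - 1):
        is_peak = (magnitude[i] > threshold
                   and magnitude[i] > magnitude[i - 1]
                   and magnitude[i] >= magnitude[i + 1])
        if is_peak and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# A synthetic signal: gravity baseline with three stride peaks.
signal = [9.8] * 160
for p in (30, 80, 130):
    signal[p] = 13.0
print(count_steps(signal))  # → 3
```

The `min_gap` debouncing step matters in practice: a single stride often produces several closely spaced magnitude maxima, which would otherwise be double-counted.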
52

Classifying Pairwise Object Interactions: A Trajectory Analytics Approach

Janmohammadi, Siamak 05 1900 (has links)
Extensively deployed surveillance cameras produce huge amounts of video data, and maturing technology makes it increasingly easy to record the motion of a moving object as trajectory data. With the proliferation of location-enabled devices, growing smartphone penetration, and advances in image processing techniques, tracking moving objects is more readily achievable. In this work, we explore domain-independent qualitative and quantitative features of raw trajectory (spatio-temporal) data in videos captured by a single fixed wide-angle camera in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms, and we show how each feature affects the learning algorithms' overall accuracy, both as a single factor and confounded with others.
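A minimal example of the kind of qualitative and quantitative pairwise-trajectory features the abstract refers to: per-frame distance between two tracked objects, plus a coarse qualitative label for whether they are approaching or receding. The feature names and thresholds are illustrative assumptions, not the thesis's exact feature set.

```python
import math

def pairwise_features(traj_a, traj_b, eps=1e-6):
    """Given two trajectories (lists of (x, y) points, one per frame),
    return the per-frame distances and a qualitative label for each
    transition: 'approaching', 'receding', or 'steady'."""
    dists = [math.dist(a, b) for a, b in zip(traj_a, traj_b)]
    labels = []
    for d1, d2 in zip(dists, dists[1:]):
        delta = d2 - d1
        if delta < -eps:
            labels.append("approaching")
        elif delta > eps:
            labels.append("receding")
        else:
            labels.append("steady")
    return dists, labels

# Object A walks toward a stationary object B.
a = [(0, 0), (1, 0), (2, 0)]
b = [(4, 0), (4, 0), (4, 0)]
print(pairwise_features(a, b))  # → ([4.0, 3.0, 2.0], ['approaching', 'approaching'])
```

Features like these are domain independent in the sense the abstract describes: they depend only on the raw spatio-temporal coordinates, not on appearance or scene semantics.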
53

Recognition of human interactions with vehicles using 3-D models and dynamic context

Lee, Jong Taek, 1983- 11 July 2012 (has links)
This dissertation describes two distinct methods for human-vehicle interaction recognition: one for ground-level videos and one for aerial videos. For ground-level videos, it presents a novel methodology that can estimate a detailed status of a scene involving multiple humans and vehicles, tracking their configuration even during complex interactions with severe occlusion, such as four persons exiting a car together. The motivation is to identify the 3-D states of vehicles (e.g., the status of doors) and their relations with persons, which is necessary to analyze complex human-vehicle interactions (e.g., breaking into or stealing a vehicle), as well as the motion of humans and car doors, in order to detect atomic human-vehicle interactions. A probabilistic algorithm has been designed to track humans and analyze their dynamic relationships with vehicles using dynamic context. We focus on two ideas. The first is that many simple events can be detected by low-level analysis, and these detected events must be contextually consistent with the human/vehicle status tracking results. The second is that motion cues constrain the states in current and future frames, so analyzing motion is critical to detecting such simple events. Our approach updates the probability of a person (or a vehicle) having a particular state based on these basic observed events, and probabilistic inference allows the tracking process to match event-based evidence with motion-based evidence. For aerial videos, object resolution is low, visual cues are vague, and the detection and tracking of objects is consequently less reliable; any method that requires accurate tracking of objects or exact matching of event definitions is better avoided. To address these issues, we present a temporal-logic-based approach that does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle relationships against the pre-specified event definitions in a piecewise fashion. With special interest in recognizing a person getting into and out of a vehicle, we tested our method on a subset of the VIRAT Aerial Video dataset and achieved superior results.
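The core probabilistic step — updating the belief about a state (such as a door being open) from an observed low-level event — is essentially a discrete Bayesian update. The sketch below is illustrative only; the state names and likelihood numbers are assumptions, not the dissertation's model.

```python
def bayes_update(prior, likelihoods):
    """One discrete Bayesian update: multiply the prior probability of
    each state by the likelihood of the observed event under that state,
    then renormalize. `prior` and `likelihoods` map state -> probability."""
    posterior = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

# Hypothetical example: observing "person exits car" is far more likely
# if the door is open, so the belief shifts accordingly.
prior = {"door_open": 0.5, "door_closed": 0.5}
event_likelihood = {"door_open": 0.9, "door_closed": 0.1}
print(bayes_update(prior, event_likelihood))  # → {'door_open': 0.9, 'door_closed': 0.1}
```

Repeating this update frame by frame, with likelihoods drawn from both event-based and motion-based evidence, is one way to realize the evidence-matching idea the abstract describes.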
54

Analysis and Recognition of Human Activities in Video Sequences in the Semantic Space

Beaudry, Cyrille 26 November 2015 (has links)
This thesis focuses on the characterization and recognition of human activities in videos. Growing interest in this topic within computer vision is motivated by a wide variety of applications such as automatic video indexing, video surveillance, and assistance for the elderly. In the first part of our work, we develop a method for recognizing elementary actions based on optical flow estimation in video. Critical points of the resulting vector field, and their trajectories, are estimated at different spatio-temporal scales. The late fusion of motion-orientation and gradient-variation characteristics in the neighborhood of the critical points, combined with a frequency-domain description of the trajectories, yields recognition rates among the best in the literature. In the second part, we build a method for recognizing complex activities by treating them as temporal sequences of elementary actions. Our action recognition method is used to compute the probabilities of elementary actions performed over time. These probability sequences evolve on a statistical manifold called the semantic simplex, and an activity is finally represented as a trajectory in this space. We introduce a frequency-domain trajectory descriptor to classify human activities according to the shape of their associated trajectories; this descriptor takes into account the geometry induced by the semantic simplex.
55

Intelligent Recognition of the Smartphone User's Activity

Pustka, Michal January 2018 (has links)
This thesis deals with real-time human activity recognition (e.g., running, walking, driving) using the sensors available on current mobile devices. The final product consists of multiple parts: first, an application for collecting sensor data from mobile devices; then a tool for preprocessing the collected data and creating a dataset. The main part of the thesis is the design of a convolutional neural network for activity classification and the subsequent use of this network in an Android mobile application. Together, these parts form a comprehensive framework for detecting user activities. Finally, several experiments were carried out and evaluated (e.g., the influence of specific sensors on detection precision).
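Before sensor streams can be fed to a convolutional network, they are typically segmented into fixed-size, overlapping windows, each of which becomes one training example. A minimal sketch of that preprocessing step follows; the window size and 50% overlap are common defaults, not the thesis's exact values.

```python
def sliding_windows(samples, size=128, step=64):
    """Split a stream of sensor samples into fixed-size windows that
    overlap by (size - step) samples. Each window becomes one
    classification example; `size` and `step` are assumed defaults."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

windows = sliding_windows(list(range(256)))
print(len(windows))  # → 3 (windows starting at 0, 64, and 128)
```

The overlap matters for recognition latency: with a step of 64 samples at 50 Hz, a new prediction is available roughly every 1.3 seconds even though each window covers about 2.6 seconds.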
56

Human Activity Recognition Using Smartphone Sensors

Novák, Andrej January 2016 (has links)
Smartphone adoption continues to grow, and with it the demand for automating and exploiting the capabilities the phone offers, whether in medicine (health care and monitoring) or in user applications (automatic recognition of position, etc.). As part of this work, a system for recognizing human activity from smartphone sensor data was designed and implemented, along with a determination of the optimal parameters, an evaluation of recognition success rates, and a comparison of the individual classifiers. Further contributions include the design of a format for, and a means of displaying, a large training set consisting of real recordings and their manual annotations. In addition, a software tool was created for validating elements of the training set and extracting features from it, together with software that uses deep learning to train models and then test them.
57

E‐Shape Analysis

Sroufe, Paul 12 1900 (has links)
The motivation of this work is to understand E-shape analysis and how it can be applied to various classification tasks. Its distinguishing feature is that it considers not only what information is contained but how that information looks, which makes E-shape analysis language independent and, to some extent, size independent. In this thesis, I present a new mechanism, E-shape analysis for email, that characterizes an email without using content or context. I explore applications of email shape through a case study on botnet detection and two further applications: spam filtering and social-context-based fingerprinting. The second part of this thesis applies E-shape analysis to human activity recognition: using the Android platform and a T-Mobile G1 phone, I collect data from the triaxial accelerometer and use it to classify the motion behavior of a subject.
58

Deep Learning Approach for Extracting Heart Rate Variability from a Photoplethysmographic Signal

Odinsdottir, Gudny Björk, Larsson, Jesper January 2020 (has links)
Photoplethysmography (PPG) is a method for detecting blood volume changes in every heartbeat. The peaks in the PPG signal correspond to the electrical impulses sent by the heart. The duration between heartbeats varies, and these variations are known as heart rate variability (HRV). Thus, correctly finding peaks in PPG signals makes it possible to measure HRV accurately. Previous research indicates that deep learning approaches can extract HRV from a PPG signal with significantly greater accuracy than traditional methods. In this study, deep learning classifiers were built to detect peaks in a noise-contaminated PPG signal and to recognize the activity performed during data recording. The dataset used is provided by the PhysioBank database and consists of synchronized PPG, acceleration, and gyroscope data. The models investigated were limited to a one-layer LSTM network with six different numbers of neurons and four different window sizes. The most accurate model for peak classification consisted of 256 neurons with a window size of 15 time steps, achieving a Matthews correlation coefficient (MCC) of 0.74. The model consisting of 64 neurons with a window duration of 1.25 seconds gave the most accurate activity classification, with an MCC of 0.63. In conclusion, further optimization of a deep learning approach could yield promising accuracy on peak detection and thus an accurate measurement of HRV. The probable cause of the low accuracy on the activity classification problem is the limited data used in this study.
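The Matthews correlation coefficient reported above is a single-number summary of a binary confusion matrix that, unlike plain accuracy, stays informative when classes are imbalanced (as peak/non-peak labels are). Its standard definition can be computed directly:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts.
    Returns a value in [-1, 1]; 1 is perfect prediction, 0 is no better
    than chance. The denominator is defined as 0 -> MCC 0 by convention."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator if denominator else 0.0

print(mcc(5, 5, 0, 0))  # → 1.0 (perfect classifier)
```

An MCC of 0.74 for peak detection therefore indicates substantially better-than-chance agreement even if the non-peak class dominates the samples.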
59

Eye Movement Analysis for Activity Recognition in Everyday Situations

Gustafsson, Anton January 2018 (has links)
The increasing number of smart devices in our everyday environment has created new problems within human-computer interaction, such as how we humans are supposed to interact with these devices efficiently and with ease. Context-aware systems are a possible candidate for solving this problem: if a system could automatically detect people's activities and intentions, it could act accordingly without any explicit input from the user. Eyes have previously been shown to be a rich source of information about a person's cognitive state and current activity, so eye movements could be a viable input modality for extracting activity information. In this thesis, we examine the possibility of detecting human activity using a low-cost, home-built monocular eye tracker. An experiment was conducted where participants performed everyday activities in a kitchen while their eye movement data was collected. After the experiment, the data was annotated, preprocessed, and classified using multilayer perceptron and random forest classifiers. Even though the collected dataset was small, the results showed a recognition rate of 30-40% depending on the classifier used. This confirms previous work that activity recognition from eye movement data is possible, but also that achieving high accuracy remains challenging.
60

Automatic Feature Extraction for Human Activity Recognition on the Edge

Cleve, Oscar, Gustafsson, Sara January 2019 (has links)
This thesis evaluates two methods for automatic feature extraction to classify accelerometer data from periodic and sporadic human activities. The first method selects features using individual hypothesis tests, combined in this study with a correlation filter; the second uses a random forest classifier as an embedded feature selector. Both methods start from the same pool of automatically generated time series features, and a decision tree classifier performs the human activity recognition task in both cases. The possibility of running the developed model on a processor with limited computing power was taken into consideration when selecting methods for evaluation. The classification results showed that the random forest method was good at prioritizing among features: with 23 features selected it achieved a macro-average F1 score of 0.84 and a weighted-average F1 score of 0.93, whereas the first method reached only a macro-average F1 score of 0.40 and a weighted-average F1 score of 0.63 with the same number of features. In addition to classification performance, this thesis studies the potential business benefits that automating feature extraction can bring.
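The correlation-filter idea mentioned in the first method — discarding features that are redundant with ones already kept — can be sketched as a greedy pass over the feature pool. The threshold value and greedy ordering are illustrative assumptions, not the thesis's exact procedure.

```python
import math
import statistics

def correlation_filter(features, threshold=0.95):
    """Greedy correlation filter: walk the features in order and keep one
    only if its absolute Pearson correlation with every already-kept
    feature stays below `threshold`. `features` maps name -> value list."""
    def pearson(x, y):
        mx, my = statistics.fmean(x), statistics.fmean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# 'b' is an exact multiple of 'a', so it is filtered out as redundant.
pool = {"a": [1, 2, 3, 4], "b": [2, 4, 6, 8], "c": [4, 1, 3, 2]}
print(correlation_filter(pool))  # → ['a', 'c']
```

Filtering redundant features before the hypothesis tests keeps the final feature set small, which matters for the edge-deployment constraint the thesis emphasizes.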
