101 |
Design, Optimization, and Applications of Wearable IoT Devices. January 2020 (has links)
abstract: Movement disorders are becoming one of the leading causes of functional disability due to aging populations and extended life expectancy. Diagnosis, treatment, and rehabilitation currently depend on the behavior observed in a clinical environment. After the patient leaves the clinic, there is no standard approach to continuously monitor the patient and report potential problems. Furthermore, self-recording is inconvenient and unreliable. To address these challenges, wearable health monitoring is emerging as an effective way to augment clinical care for movement disorders.
Wearable devices are being used in many health, fitness, and activity monitoring applications. However, their widespread adoption has been hindered by several adaptation and technical challenges. First, conventional rigid devices are uncomfortable to wear for long periods. Second, wearable devices must operate under very low-energy budgets due to their small battery capacities. Small batteries create a need for frequent recharging, which in turn leads users to stop using them. Third, the usefulness of wearable devices must be demonstrated through high impact applications such that users can get value out of them.
This dissertation presents solutions to the challenges faced by wearable devices. First, it presents an open-source hardware/software platform for wearable health monitoring. The proposed platform uses flexible hybrid electronics to enable devices that conform to the shape of the user’s body. Second, it proposes an algorithm to enable recharge-free operation of wearable devices that harvest energy from the environment. The proposed solution maximizes the performance of the wearable device under minimum energy constraints. The results of the proposed algorithm are, on average, within 3% of the optimal solution computed offline. Third, a comprehensive framework for human activity recognition (HAR), one of the first steps towards a solution for movement disorders, is presented. It starts with an online learning framework for HAR. Experiments on a low-power IoT device (TI-CC2650 MCU) with twenty-two users show 95% accuracy in identifying seven activities and their transitions with less than 12.5 mW power consumption. The online learning framework is accompanied by a transfer learning approach for HAR that determines the number of neural network layers to transfer among users to enable efficient online learning. Next, a technique to co-optimize the accuracy and active time of wearable applications by utilizing multiple design points with different energy-accuracy trade-offs is presented. The proposed technique switches between the design points at runtime to maximize a generalized objective function under tight harvested-energy budget constraints. Finally, the dissertation presents the first ultra-low-energy hardware accelerator that makes it practical to perform HAR on the energy harvested by wearable devices. The accelerator consumes 22.4 microjoules per operation in a commercial 65 nm technology. In summary, the solutions presented in this dissertation can enable the wider adoption of wearable devices. / Dissertation/Thesis / Human activity recognition dataset / Doctoral Dissertation Computer Engineering 2020
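To make the runtime design-point switching idea concrete, the sketch below shows a minimal greedy scheduler that picks, for each time slot, the most accurate design point that still leaves enough energy to keep running until the end of the horizon. This is an illustrative baseline only, not the dissertation's algorithm (which is reported to be within 3% of the offline optimum); the design points, power numbers, and budget are hypothetical.

```python
# A minimal greedy sketch (not the dissertation's algorithm) of choosing among
# energy/accuracy design points under a harvested-energy budget.
# Design points and all numbers below are hypothetical.

design_points = [                 # (name, power in mW, accuracy)
    ("low",  5.0, 0.80),
    ("mid",  9.0, 0.90),
    ("high", 12.5, 0.95),
]

def schedule(energy_budget_mJ, horizon_s, slot_s=1.0):
    """Per slot, pick the most accurate design point that still leaves enough
    energy to finish the horizon at the cheapest design point."""
    plan, remaining = [], energy_budget_mJ
    cheapest = min(p for _, p, _ in design_points)
    slots = int(horizon_s / slot_s)
    for slot in range(slots):
        slots_left_after = slots - slot - 1
        for name, power, acc in sorted(design_points, key=lambda d: -d[2]):
            cost = power * slot_s                       # mW * s = mJ
            reserve = cheapest * slot_s * slots_left_after
            if cost + reserve <= remaining:
                plan.append(name)
                remaining -= cost
                break
        else:
            plan.append("off")                          # not enough energy this slot
    return plan

print(schedule(energy_budget_mJ=600.0, horizon_s=60.0))
```

A real solution would also account for energy arriving during operation and optimize a generalized objective rather than this slot-by-slot heuristic.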
|
102 |
Rozpoznávání aktivit z trajektorií pohybujících se objektů / Activity Recognition from Moving Object Trajectories. Schwarz, Ivan. January 2013 (has links)
The aim of this thesis is the development of a system for trajectory-based periodic pattern recognition and subsequent GPS trajectory classification. The system is designed according to an analysis of data mining techniques for moving-object data and of recent research on trajectory-based activity recognition. The system is implemented in the C++ programming language, and experiments addressing its effectiveness are performed.
|
103 |
A Deep Learning Approach to Video Processing for Scene Recognition in Smart Office Environments. Casserfelt, Karl. January 2018 (has links)
The field of computer vision, where the goal is to allow computer systems to interpret and understand image data, has in recent years seen great advances with the emergence of deep learning. Deep learning, a technique that emulates the information processing of the human brain, has been shown to almost solve the problem of object recognition in image data. One of the next big challenges in computer vision is to allow computers to recognize not only objects but also activities. This study is an exploration of the capabilities of deep learning for the specific problem area of activity recognition in office environments. The study used a re-labeled subset of the AMI Meeting Corpus video data set to comparatively evaluate the performance of different neural network models in the given problem area, and then evaluated the best-performing model on a novel data set of office activities captured in a research lab at Malmö University. The results showed that the best-performing model was a 3D convolutional neural network (3DCNN) with temporal information in the third dimension; however, a recurrent convolutional network (RCNN), which uses a pre-trained VGG16 model to extract features that are fed into a recurrent neural network with a unidirectional Long Short-Term Memory (LSTM) layer, performed almost as well with the right configuration. An analysis of the results suggests that a 3DCNN's performance is dependent on the camera angle, specifically on how well movement is spatially distributed between people in frame.
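As an illustration of the second architecture described above, the sketch below wires a pre-trained VGG16 feature extractor into a unidirectional LSTM in PyTorch. The number of classes, hidden size, frozen backbone, and input resolution are assumptions for the sketch, not the configuration used in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTM(nn.Module):
    """Frame-level VGG16 features fed into a unidirectional LSTM (hypothetical configuration)."""
    def __init__(self, num_classes=4, hidden_size=256):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features                 # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.fc = nn.Sequential(*list(vgg.classifier.children())[:-1])  # 4096-d features
        for p in self.features.parameters():          # keep the pre-trained extractor fixed
            p.requires_grad = False
        self.lstm = nn.LSTM(input_size=4096, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                         # clips: (batch, time, 3, 224, 224)
        b, t, c, h, w = clips.shape
        x = clips.view(b * t, c, h, w)
        x = self.fc(torch.flatten(self.pool(self.features(x)), 1))
        x = x.view(b, t, -1)                          # back to (batch, time, 4096)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                  # classify from the last time step
```

A 3DCNN variant would instead apply 3D convolutions directly over the (time, height, width) volume rather than extracting per-frame features.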
|
104 |
A Comparative Study of Deep-Learning Approaches for Activity Recognition Using Sensor Data in Smart Office Environments. Johansson, Alexander; Sandberg, Oscar. January 2018 (has links)
The purpose of the study is to compare three deep learning networks with each other to evaluate which network can produce the highest prediction accuracy. Accuracy is measured as the networks try to predict the number of people in the room where observation takes place. In addition to comparing the three deep learning networks with each other, we also compare the networks with a traditional machine learning approach, in order to find out if deep learning methods perform better than traditional methods. This study uses design and creation, a research methodology that places great emphasis on developing an IT product and uses the product as its contribution to new knowledge. The methodology has five different phases; we chose to iterate between the development and evaluation phases. Observation is the data generation method used to collect data. Data generation lasted for three weeks, resulting in 31287 rows of data recorded in our database. One of our deep learning networks produced an accuracy of 78.2%, while the other two produced accuracies of 45.6% and 40.3%, respectively. For our traditional method we used decision trees with two different formulas, which produced accuracies of 61.3% and 57.2%, respectively. The result of this thesis shows that, of the three deep learning networks included in this study, only one is able to produce a higher predictive accuracy than the traditional machine learning approaches. This result does not necessarily mean that deep learning approaches in general are able to produce a higher predictive accuracy than traditional machine learning approaches. Further work includes: additional experimentation with the dataset and the hyperparameters of the deep learning networks, gathering and properly validating more data, and comparing more deep learning and machine learning approaches.
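A comparison of this kind, a traditional decision tree against a neural network on the same observation data, can be sketched in a few lines of scikit-learn; the data file, feature columns, and model settings below are hypothetical and not taken from the thesis.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical sensor log: ambient readings plus the observed number of
# people in the room ("occupancy") as the prediction target.
df = pd.read_csv("office_sensor_log.csv")
X = df[["co2", "temperature", "sound_level", "motion_count"]]
y = df["occupancy"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=8, random_state=42),
    "neural_network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)                       # train on the observation data
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")                       # compare held-out accuracy
```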
|
105 |
Comportements d'agents en mouvement : une approche cognitive pour la reconnaissance d'intentions / Moving agents behaviours : a cognitive approach for intention recognition. Vidal, Nicolas. 28 September 2014 (has links)
In a maritime area supervision context, we seek to provide a human operator with semantically rich, dynamic information on the behaviours of the monitored entities. Linking the raw measurements coming from a sensor system with abstract descriptions of those behaviours is a difficult problem. It is usually addressed with a two-step treatment: first filtering the multidimensional, heterogeneous and imprecise measurements into a stream of symbolic events, and then applying plan recognition techniques to those events. This allows, among other things, high-level symbolic plan steps to be described without being overwhelmed by low-level sensor specificities. However, the first step destroys information and thereby generates additional ambiguity in the recognition process. Furthermore, splitting the behaviour recognition task leads to redundant computations and makes building the plan library harder. We therefore propose to tackle this problem without dividing the recognition process in two. We present a hierarchical model, inspired by formal language theory, that allows us to describe behaviours in a continuous way and to bridge the semantic gap between sensor measurements and the entities' intentions. Thanks to a set of algorithms that manipulate this model, we are able, from observations, to deduce the plausible future developments of the monitored area while providing the appropriate explanations.
|
106 |
Human Activity Recognition and Step Counter Using Smartphone Sensor Data. Jansson, Fredrik; Sidén, Gustaf. January 2022 (has links)
Human Activity Recognition (HAR) is a growing field of research concerned with classifying human activities from sensor data. Modern smartphones contain numerous sensors that could be used to identify the physical activities of the smartphone wearer, which could have applications in sectors such as healthcare, eldercare, and fitness. This project aims to use smartphone sensor data together with machine learning to perform HAR on the following human locomotion activities: standing, walking, running, ascending stairs, descending stairs, and biking. The classification was done using a random forest classifier. Furthermore, in the special case of walking, an algorithm that can count the number of steps in a given data sequence was developed. The step counting algorithm was not based on a previous implementation and could therefore be considered novel. The step counter achieved a testing accuracy of 99.1% and the HAR classifier a testing accuracy of 100%. It is speculated that the abnormally high accuracies can be attributed primarily to the lack of data diversity, as in both cases the data was collected by only two persons. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
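The abstract does not describe the authors' step-counting algorithm, but a common baseline for counting steps from accelerometer data is peak detection on the acceleration magnitude. The sketch below illustrates that generic approach; the sampling rate, thresholds, and synthetic data are assumptions, not the thesis's method.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc_xyz, fs=50.0, min_step_interval_s=0.3, height_ms2=1.0):
    """Count steps in a (N, 3) accelerometer sequence sampled at fs Hz.

    A generic peak-detection baseline: take the acceleration magnitude,
    remove gravity by subtracting the mean, and count prominent peaks that
    are at least one plausible step interval apart.
    """
    magnitude = np.linalg.norm(acc_xyz, axis=1)       # combine the three axes
    signal = magnitude - magnitude.mean()             # crude gravity removal
    peaks, _ = find_peaks(
        signal,
        height=height_ms2,                            # minimum peak amplitude (m/s^2)
        distance=int(min_step_interval_s * fs),       # minimum samples between steps
    )
    return len(peaks)

# Example on synthetic walking-like data (two steps per second for 10 s).
t = np.arange(0, 10, 1 / 50.0)
fake_walk = np.column_stack([
    0.3 * np.sin(2 * np.pi * 2 * t),                  # x
    0.2 * np.sin(2 * np.pi * 2 * t + 1.0),            # y
    9.81 + 2.0 * np.sin(2 * np.pi * 2 * t),           # z (gravity plus step impacts)
])
print(count_steps(fake_walk))                         # roughly 20 steps
```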
|
107 |
Activity Recognition Using Accelerometer and Gyroscope Data From Pocket-Worn Smartphones. Söderberg, Oskar; Blommegård, Oscar. January 2021 (has links)
Human Activity Recognition (HAR) is a widely researched field that has gained importance due to recent advancements in sensor technology and machine learning. In HAR, sensors are used to identify the activity that a person is performing. In this project, the six everyday life activities walking, biking, sitting, standing, ascending stairs and descending stairs are classified using smartphone accelerometer and gyroscope data collected by three subjects in their everyday life. To perform the classification, two different machine learning algorithms, Artificial Neural Network (ANN) and Support Vector Machine (SVM), are implemented and compared. Moreover, we compare the accuracy of the two sensors, both individually and combined. Our results show that the accuracy is higher using only the accelerometer data compared to using only the gyroscope data. For the accelerometer data, the accuracy is greater than 95% for both algorithms, while it is only between 83-93% using gyroscope data. Also, there is a small synergy effect when using both sensors, yielding higher accuracy than for any individual sensor data and reaching 98.5% using ANN. Furthermore, for all sensor types, the ANN outperforms the SVM algorithm, with an accuracy that is greater by 1.5-9 percentage points. / Bachelor's thesis in electrical engineering 2021, KTH, Stockholm
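A per-sensor comparison of the kind described above can be sketched with windowed summary features fed to the two classifiers; the window length, feature set, column names, and file below are illustrative assumptions rather than the project's actual setup.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Hypothetical recording: 50 Hz samples with accelerometer (ax, ay, az),
# gyroscope (gx, gy, gz), and an activity label per sample.
df = pd.read_csv("smartphone_imu.csv")
WINDOW = 128  # roughly 2.5 s windows

def windowed_features(frame, cols):
    """Mean and standard deviation of each channel over fixed-size windows."""
    feats, labels = [], []
    for start in range(0, len(frame) - WINDOW, WINDOW):
        w = frame.iloc[start:start + WINDOW]
        feats.append(np.r_[w[cols].mean().values, w[cols].std().values])
        labels.append(w["activity"].mode()[0])        # majority label in the window
    return np.array(feats), np.array(labels)

sensor_sets = {
    "accelerometer": ["ax", "ay", "az"],
    "gyroscope": ["gx", "gy", "gz"],
    "both": ["ax", "ay", "az", "gx", "gy", "gz"],
}

for sensor_name, cols in sensor_sets.items():
    X, y = windowed_features(df, cols)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for model_name, model in {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)),
        "ANN": make_pipeline(StandardScaler(), MLPClassifier((64,), max_iter=1000, random_state=0)),
    }.items():
        model.fit(X_tr, y_tr)
        print(f"{sensor_name:13s} {model_name}: {model.score(X_te, y_te):.3f}")
```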
|
108 |
Software Defined Radio (SDR) based sensing. Dahal, Ajaya. 10 May 2024 (has links) (PDF)
The history of Software-Defined Radios (SDRs) epitomizes innovation in wireless communication. Initially serving military needs, SDRs swiftly transitioned to civilian applications, revolutionizing communication. This thesis explores SDR applications such as spectrum scanning systems, contraband cellphone detection, and human activity recognition via Wi-Fi signals. SDRs empower spectrum scanning systems to monitor and analyze radio frequencies, optimizing spectrum allocation for seamless wireless communication. In contraband cellphone detection, SDRs identify unauthorized signals in restricted areas, bolstering security efforts by thwarting illicit cellphone usage. Human activity recognition uses a Raspberry Pi 3B+ to track movement patterns via Wi-Fi signals, offering insights across various sectors. Additionally, the thesis conducts a comparative analysis of Wi-Fi-based human activity recognition and radar to assess their accuracy. SDRs continue to drive innovation, enhancing wireless communication and security in diverse domains, from defense to healthcare and beyond.
|
109 |
Contribution à la reconnaissance non-intrusive d'activités humaines / Contribution to the non-intrusive recognition of human activities. Trabelsi, Dorra. 25 June 2013 (has links)
Human activity recognition is currently a very active research topic, as witnessed by the extensive work that has recently been conducted on the subject. In this context, the recognition of physical human activities is an emerging domain with expected impacts in the monitoring of certain pathologies and of people's health status, in rehabilitation procedures, etc. In this thesis, we propose a new approach for the automatic recognition of human activity from raw acceleration data measured using inertial wearable sensors placed at key points of the human body. The approaches studied in this thesis are categorized into two parts: the first deals with supervised approaches while the second treats unsupervised ones, with particular emphasis on unsupervised approaches that require no data labelling. The proposed unsupervised approach is based upon joint segmentation of multidimensional time series using a Hidden Markov Model (HMM) in a multiple regression context, where each segment is associated with an activity. The model is learned in an unsupervised framework where no activity labels are needed. The proposed approach takes into account the sequential appearance and temporal evolution of the data. The results clearly show the superiority of the proposed approach over other approaches in terms of classification accuracy for static and dynamic activities as well as for transitions between activities.
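One schematic way to write down a multiple-regression HMM of the kind described above is given below; the exact parameterization used in the thesis may differ, so this should be read as a generic formulation rather than the author's model.

$$
P(z_t = k \mid z_{t-1} = \ell) = A_{\ell k}, \qquad
\mathbf{y}_t = \mathbf{B}_{z_t}^{\top}\,\mathbf{x}_t + \boldsymbol{\varepsilon}_t, \qquad
\boldsymbol{\varepsilon}_t \sim \mathcal{N}\!\left(\mathbf{0}, \boldsymbol{\Sigma}_{z_t}\right),
$$

where $z_t \in \{1,\dots,K\}$ is the hidden activity state following a Markov chain with transition matrix $A$, $\mathbf{y}_t$ is the multidimensional acceleration measurement at time $t$, $\mathbf{x}_t$ is a vector of time-dependent regressors (for example a polynomial basis in $t$), and $\mathbf{B}_k$, $\boldsymbol{\Sigma}_k$ are the regression coefficients and noise covariance of state $k$. The parameters can be estimated without labels via expectation-maximization, and the segmentation is read off the most probable state sequence (for example via the Viterbi algorithm), with each contiguous run of a state interpreted as one activity segment.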
|
110 |
Multi-modal recognition of manipulation activities through visual accelerometer tracking, relational histograms, and user-adaptation. Stein, Sebastian. January 2014 (has links)
Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities. This thesis proposes a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides for each accelerometer-equipped object a location estimate in the camera view by identifying a point trajectory that matches the accelerometer data well. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics characterizes statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, uses an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach where features are extracted from each sensor type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this thesis investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies. All proposed methods are evaluated on two new challenging datasets of food preparation activities that have been made publicly available. Both datasets feature a novel combination of video and accelerometers attached to objects. The Accelerometer Localization dataset is the first publicly available dataset that enables quantitative evaluation of accelerometer tracking algorithms. The 50 Salads dataset contains 50 sequences of people preparing mixed salads with detailed activity annotations.
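The accelerometer-tracking idea described above, selecting the point trajectory whose visual motion best agrees with the measured accelerations, can be illustrated with a simple correlation-based baseline; this sketch makes assumptions about the inputs (synchronized samples, 2D image-plane trajectories) and is not the thesis's actual algorithm.

```python
import numpy as np

def second_derivative(traj, dt):
    """Finite-difference acceleration of a (T, 2) image-plane trajectory."""
    return np.gradient(np.gradient(traj, dt, axis=0), dt, axis=0)

def match_accelerometer_to_trajectory(point_trajectories, accel_xyz, dt=0.02):
    """Pick the point trajectory whose acceleration profile correlates best
    with the magnitude of the measured accelerometer signal.

    point_trajectories: list of (T, 2) arrays of (x, y) positions per frame.
    accel_xyz:          (T, 3) array of synchronized accelerometer samples.
    Returns the index of the best-matching trajectory.
    """
    measured = np.linalg.norm(accel_xyz, axis=1)      # acceleration magnitude
    measured = measured - measured.mean()             # crude gravity removal

    scores = []
    for traj in point_trajectories:
        visual = np.linalg.norm(second_derivative(traj, dt), axis=1)
        visual = visual - visual.mean()
        denom = np.linalg.norm(measured) * np.linalg.norm(visual)
        scores.append(float(measured @ visual / denom) if denom > 0 else -1.0)
    return int(np.argmax(scores))
```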
|