About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

WiFi-Based Driver Activity Recognition Using CSI Signal

Bai, Yunhao January 2020
No description available.
12

Automated Recognition of Human Activity : A Practical Perspective of the State of Research

Hansson, Hampus, Gyllström, Martin January 2021
The rapid development of sensor technology in smartphones and wearable devices has drawn research to the area of human activity recognition (HAR). As a phase in HAR, applying classification models to collected sensor data is well researched, and many of the different models can recognize activities successfully. Furthermore, some methods give successful results using only one or two sensors. The use of HAR within pain management is also an existing research field, but applying HAR to the pain treatment strategy of acceptance and commitment therapy (ACT) is not well documented. The relevance of HAR in this context is that ACT's core ideas are based on the perspective that daily life activities are connected to pain. In this thesis, state-of-the-art examples of sensor-based HAR applicable to ACT are provided through a literature review. Based on these findings, the practical use is assessed in order to provide a perspective on the current state of research.
13

Using a Smartphone to Detect the Standing-to-Kneeling and Kneeling-to-Standing Postural Transitions / Smartphone-baserad detektion av posturala övergångar mellan stående och knästående ställning

Setterquist, Dan January 2018
In this report we investigate how well a smartphone can be used to detect the standing-to-kneeling and kneeling-to-standing postural transitions. Possible applications include measuring the time spent kneeling in groups of workers prone to knee-straining work. Accelerometer and gyroscope data were recorded from a group of 10 volunteers while they performed a set of postural transitions according to an experimental script. The set included the standing-to-kneeling and kneeling-to-standing transitions, in addition to a selection of transitions common in knee-straining occupations. Using recorded video, the data was labeled and segmented into a data set consisting of 3-second sensor-data segments in 9 different classes.

The classification performance of a number of LSTM networks was evaluated on the data set. When evaluated in a user-specific setting, the best network achieved an overall classification accuracy of 89.4%, with precision 0.982 and recall 0.917 for the standing-to-kneeling transitions, and precision 0.900 and recall 0.900 for the kneeling-to-standing transitions. When the same network was evaluated in a user-independent setting it achieved an overall accuracy of 66.3%, with precision 0.720 and recall 0.746 for the standing-to-kneeling transitions, and precision 0.707 and recall 0.604 for the kneeling-to-standing transitions. The network was also evaluated using only accelerometer data; the performance was similar to that achieved with both accelerometer and gyroscope data. Finally, the classification speed of the network was evaluated on a smartphone: on a Samsung Galaxy S7 the average time needed for one classification was 38.5 milliseconds, so classification can be done in real time.
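To make the segmentation-and-classification pipeline concrete, here is a minimal sketch of an LSTM classifier over 3-second sensor windows. This is not the thesis code: the sampling rate, hidden size, and training setup are assumptions.

```python
# Minimal sketch of an LSTM classifier over 3-second sensor segments
# (illustrative only; sampling rate and layer sizes are assumptions).
import numpy as np
from tensorflow.keras import layers, models

SAMPLE_RATE = 50          # Hz (assumed; not stated in the abstract)
WINDOW = 3 * SAMPLE_RATE  # 3-second segments, as in the study
N_CHANNELS = 6            # 3-axis accelerometer + 3-axis gyroscope
N_CLASSES = 9             # postural-transition classes

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_CHANNELS)),
    layers.LSTM(64),                              # assumed hidden size
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x: labeled sensor segments of shape (n_segments, 150, 6);
# y: integer class labels for the 9 transition classes.
x = np.random.randn(32, WINDOW, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=32)
model.fit(x, y, epochs=1, verbose=0)
```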
14

Trust in Human Activity Recognition Deep Learning Models

Simons, Ama January 2021
Trust is explored in this thesis through an analysis of the robustness of wearable-device-based artificial intelligence models to changes in data acquisition. Specifically, changes in wearable device hardware and in recording session are explored. Three human activity recognition models are used as a vehicle for this: Model A, trained on accelerometer signals recorded by a wearable sensor referred to as Astroskin; Model H, trained on accelerometer signals from a wearable sensor referred to as the BioHarness; and Model A Type 1, trained on Astroskin accelerometer signals recorded in the first session of the experimental protocol. On a test set recorded by Astroskin, Model A had 99.07% accuracy; on a test set recorded by the BioHarness, it had 65.74% accuracy. On a test set recorded by the BioHarness, Model H had 95.37% accuracy; on a test set recorded by Astroskin, it had 29.63% accuracy. Model A Type 1 achieved an average accuracy of 99.57% on data recorded by the same wearable sensor in the same session; 50.95% on a test set recorded by the same wearable sensor but in a different session; 41.31% on data recorded by a different wearable sensor in the same session; and 19.28% on data recorded by a different wearable sensor in a different session. An out-of-domain discriminator for Model A Type 1 was also implemented. It was able to differentiate between the data that trained Model A Type 1 and other types (data recorded by different wearable devices or in different sessions) with an accuracy of 97.60%. / Thesis / Master of Applied Science (MASc) / The trustworthiness of artificial intelligence must be explored before society can fully reap its benefits. The element of trust explored in this thesis is the robustness of wearable-device-based artificial intelligence models to changes in data acquisition. The specific changes explored are changes in the wearable device used to record the input data, as well as input data from different recording sessions. Using human activity recognition models as a vehicle, the results show that performance degradation occurs when the wearable device is changed and when data comes from a different recording session. An out-of-domain discriminator is developed to alert users when a potential performance degradation may occur.
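The out-of-domain discriminator described above is, at its core, a binary classifier that separates in-distribution from out-of-distribution recordings. The following is a minimal sketch of that idea, not the thesis implementation; the features, model choice, and data are placeholders.

```python
# Sketch of an out-of-domain discriminator: a binary classifier that
# flags inputs unlike the training distribution (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical feature vectors: rows are windowed accelerometer features.
in_domain = rng.normal(0.0, 1.0, size=(500, 16))    # e.g. Astroskin, session 1
out_domain = rng.normal(0.8, 1.3, size=(500, 16))   # e.g. other device/session

X = np.vstack([in_domain, out_domain])
y = np.array([0] * 500 + [1] * 500)   # 0 = in-domain, 1 = out-of-domain

disc = LogisticRegression(max_iter=1000).fit(X, y)

# At inference time, a high out-of-domain score warns the user before
# the activity classifier's prediction is trusted.
score = disc.predict_proba(in_domain[:1])[0, 1]
print(f"out-of-domain probability: {score:.2f}")
```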
15

Handwritten signature verification using locally optimized distance-based classification.

Moolla, Yaseen. 28 November 2013
Although handwritten signature verification has been extensively researched, it has not achieved an optimal accuracy rate. Efficient and accurate signature verification techniques are therefore required, since signatures are still widely used as a means of personal verification. This research presents efficient distance-based classification techniques as an alternative to supervised learning classification techniques (SLTs). Two feature extraction techniques were used, namely the Enhanced Modified Direction Feature (EMDF) and the Local Directional Pattern feature (LDP). These were used to analyze the effect of several different distance-based classification techniques, among them the cosine similarity measure and the Mahalanobis, Canberra, Manhattan, Euclidean, weighted Euclidean and fractional distances. Additionally, novel weighted fractional distances, as well as locally optimized resampling of feature vector sizes, were tested. The best accuracy was achieved by applying a combination of the weighted fractional distances and locally optimized resampling to the Local Directional Pattern features. This combination of multiple distance-based classification techniques achieved an accuracy rate of 89.2% with the EMDF feature extraction technique and 90.8% with the LDP feature extraction technique. These results are comparable to those in the literature, where the same feature extraction techniques were classified with SLTs. The best of the distance-based classification techniques were found to produce greater accuracy than the SLTs. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2012.
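For reference, the fractional distance is a Minkowski-style distance with an exponent below 1, and a weighted variant scales each feature dimension before the norm. A minimal sketch follows; the weighting shown is an illustrative assumption, not the locally optimized scheme developed in the thesis.

```python
# Sketch of fractional and weighted fractional distances between feature
# vectors (the weights are illustrative, not the thesis's scheme).
import numpy as np

def fractional_distance(x, y, f=0.5):
    """Minkowski-style distance with exponent f < 1."""
    return np.sum(np.abs(x - y) ** f) ** (1.0 / f)

def weighted_fractional_distance(x, y, w, f=0.5):
    """Per-dimension weights w applied before the fractional norm."""
    return np.sum(w * np.abs(x - y) ** f) ** (1.0 / f)

# A signature is accepted if its distance to the enrolled reference
# falls below a tuned threshold.
x = np.array([0.2, 0.7, 0.1])   # query feature vector (e.g. LDP features)
y = np.array([0.25, 0.6, 0.2])  # enrolled reference vector
w = np.array([1.0, 2.0, 0.5])   # hypothetical per-feature weights
print(fractional_distance(x, y), weighted_fractional_distance(x, y, w))
```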
16

Trajectory Analytics

Santiteerakul, Wasana 05 1900
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in the wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both the spatial and temporal information of the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations which can be used to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the pair to obtain an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with its trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of such sequences, we use an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
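The following sketch illustrates the general idea of converting a pair of concurrent trajectories into a relationship string; the token set and thresholds here are simplified assumptions, not the relation algebra defined in the thesis.

```python
# Sketch of turning two concurrent trajectories into a string of
# qualitative segment relations (tokens and thresholds are illustrative).
import numpy as np

def relation_token(d_prev, d_next, eps=0.1):
    """Label one segment pair by how the inter-agent distance changes."""
    if d_next < d_prev - eps:
        return "A"   # approaching
    if d_next > d_prev + eps:
        return "R"   # receding
    return "S"       # stable

def relationship_sequence(traj1, traj2):
    """traj1, traj2: (n, 2) arrays of concurrent positions."""
    dists = np.linalg.norm(traj1 - traj2, axis=1)
    return "".join(relation_token(dists[i], dists[i + 1])
                   for i in range(len(dists) - 1))

t1 = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
t2 = np.array([[5, 0], [3, 0], [2, 1], [1, 2]], dtype=float)
print(relationship_sequence(t1, t2))  # "AAR" -> input to k-medians clustering
```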
17

Human Activity Recognition : Deep learning techniques for an upper body exercise classification system

Nardi, Paolo January 2019
Most research on the use of Machine Learning models in the field of Human Activity Recognition focuses on the classification of daily human activities and aerobic exercises. In this study, we focus on the use of 1 accelerometer and 2 gyroscope sensors to build a Deep Learning classifier that recognises 5 different strength exercises, as well as a null class. The strength exercises tested in this research are as follows: bench press, bent-over row, deadlift, lateral raises and overhead press. The null class contains recordings of daily activities, such as sitting or walking around the house. The model used in this paper is based on consecutive overlapping fixed-length sliding windows for each exercise, which are processed separately and act as the input to a Deep Convolutional Neural Network. We compare different sliding-window lengths and overlap percentages (step sizes) to obtain the optimal combination of window length and overlap. Furthermore, we compare the accuracy of 1D and 2D Convolutional Neural Networks. Cross-validation is used to check the overall accuracy of the classifiers; the database used in this paper contains 5 exercises performed by 3 different users, plus a null class. Overall, the models were found to perform accurately for windows of 0.5 seconds or longer, and they provide a solid foundation for a more robust, fully integrated model that can recognize a wider variety of exercises.
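As an illustration of the windowing scheme described above, here is a minimal sketch of overlapping fixed-length sliding-window segmentation; the sampling rate and the particular length/overlap values are examples from the search space, not the paper's final choices.

```python
# Sketch of overlapping sliding-window segmentation for CNN input
# (sampling rate and window parameters are assumptions).
import numpy as np

def sliding_windows(signal, window_len, overlap):
    """Split a (n_samples, n_channels) signal into overlapping windows."""
    step = int(window_len * (1.0 - overlap))
    return np.stack([signal[i:i + window_len]
                     for i in range(0, len(signal) - window_len + 1, step)])

fs = 50                               # assumed sampling rate, Hz
signal = np.random.randn(fs * 10, 3)  # 10 s of 3-axis sensor data
windows = sliding_windows(signal, window_len=fs // 2, overlap=0.5)
print(windows.shape)  # (num_windows, 25, 3) -> batches for the CNN
```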
18

Elderly activity recognition using smartphones and wearable devices / Reconhecimento de atividades de pessoas idosas com smartphone e dispositivos vestíveis

Zimmermann, Larissa Cardoso 13 February 2019
Research that involves human beings depends on data collection. As technology solutions become popular in the context of healthcare, researchers highlight the need to monitor and care for patients in situ. Human Activity Recognition (HAR) is a research field that combines two areas: Ubiquitous Computing and Artificial Intelligence. HAR is applied daily in several service sectors, including the military, security (surveillance), health and entertainment. A HAR system aims to identify and recognize the activities and actions a user performs, in real time or not. Ambient sensors (e.g. cameras) and wearable devices (e.g. smartwatches) collect information about users and their context (e.g. location, time, companions). This data is processed by machine learning algorithms that extract information and classify the corresponding activity. Although there are several works in the literature on HAR systems, studies focusing on elderly users are limited, and most do not use, as ground truth, data collected from elderly volunteers. Databases and sensors reported in the literature are geared towards a generic audience, which leads to a loss in accuracy and robustness when they are targeted at a specific audience. Considering this gap, this work presents a Human Activity Recognition system, and a corresponding database, focused on the elderly, raising requirements and guidelines for a supportive HAR system and for the selection of sensor devices. The system evaluation was carried out by checking the accuracy of the activity recognition process and determining the best statistical features and classification algorithms for the Elderly Activity Recognition System (EARS). The results suggest that EARS is a promising supportive technology for the elderly, with an accuracy of 98.37% with KNN (k = 1).
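A minimal sketch of the statistical-feature-plus-KNN pipeline the abstract suggests; EARS's exact feature set and data shapes are not specified, so the ones below are assumptions.

```python
# Sketch of statistical feature extraction + 1-nearest-neighbor
# classification (feature set and data shapes are assumptions).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def stat_features(window):
    """Per-axis summary statistics for one sensor window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128, 3))   # hypothetical sensor segments
labels = rng.integers(0, 4, size=200)      # hypothetical activity labels

X = np.array([stat_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=1)  # k = 1, as reported in the thesis
clf.fit(X[:150], labels[:150])
print(clf.score(X[150:], labels[150:]))
```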
19

Human Motion Anticipation and Recognition from RGB-D

Barsoum, Emad January 2019
Predicting and understanding the dynamics of human motion has many applications, such as motion synthesis, augmented reality, security, education, reinforcement learning, and autonomous vehicles. In this thesis, we create a novel end-to-end pipeline that can predict multiple future poses from the same input and, in addition, can classify the entire sequence. Our focus is on the following two aspects of human motion understanding. Probabilistic human action prediction: given a sequence of human poses as input, we sample multiple possible future poses from the same input sequence using a new GAN-based network. Human motion understanding: given a sequence of human poses as input, we classify the actual action performed in the sequence and improve classification performance using the representation learned by the prediction network.

We also demonstrate how to improve model training from noisy labels, using facial expression recognition as an example. Specifically, we have 10 taggers label each input image and compare four different approaches: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We show that the traditional majority voting scheme does not perform as well as the last two approaches, which fully leverage the label distribution. We have shared the enhanced FER+ data set, with multiple labels for each face image, with the research community (https://github.com/Microsoft/FERPlus).

For predicting and understanding human motion, we propose a novel sequence-to-sequence model trained with an improved version of generative adversarial networks (GAN). Our model, which we call HP-GAN2, learns a probability density function of future human poses conditioned on previous poses. It predicts multiple sequences of possible future human poses, each from the same input sequence but seeded with a different vector z drawn from a random distribution. Moreover, to quantify the quality of the non-deterministic predictions, we simultaneously train a motion-quality-assessment model that learns the probability that a given skeleton pose sequence is real or fake human motion.

To classify the action performed in a video clip, we took two approaches. In the first, we train on sequences of skeleton poses from scratch, with random parameter initialization, using the same network architecture as the discriminator of the HP-GAN2 model. In the second, we take the discriminator of the HP-GAN2 network, extend it with an action classification branch, and fine-tune the end-to-end model on the classification task, since the discriminator in HP-GAN2 has learned to differentiate between fake and real human motion. Our hypothesis is that if the discriminator network can differentiate between synthetic and real skeleton poses, then it has also learned some of the dynamics of real human motion, and that those dynamics are useful for classification as well. We show through multiple experiments that this is indeed the case.

In summary, our model learns to predict multiple future sequences of human poses from the same input sequence. We show that the discriminator learns a general representation of human motion by using the learned features in an action recognition task, and we train a motion-quality-assessment network that measures the probability that a given sequence of poses is valid human motion. We test our model on two of the largest human pose datasets: NTU RGB+D and Human3.6M. We train on both single and multiple action types. The predictive power of our model for motion estimation is demonstrated by generating multiple plausible futures from the same input and by showing the effect of each of the several loss functions in an ablation study. We also show the advantage of switching from the WGAN-GP used in our previous work to the improved GAN. Furthermore, we show that using the features learned by the discriminator cuts the number of epochs needed to train an activity recognition network by more than half.
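The noisy-label comparison described above can be made concrete with a small example: majority voting collapses the 10 taggers to a single hard label, while the cross-entropy approach targets the full empirical label distribution. A minimal sketch, with hypothetical tagger votes:

```python
# Sketch contrasting majority voting with training on the full label
# distribution from multiple taggers (illustrative; mirrors the FER+
# comparison, not the exact training code).
import numpy as np

tags = np.array([5, 5, 5, 2, 2, 2, 2, 5, 5, 3])   # 10 taggers, one image
n_classes = 8

# Majority voting collapses the taggers to a single hard label.
hard_label = np.bincount(tags, minlength=n_classes).argmax()

# Cross-entropy loss can instead target the empirical distribution.
target = np.bincount(tags, minlength=n_classes) / len(tags)

def cross_entropy(pred, target, eps=1e-12):
    return -np.sum(target * np.log(pred + eps))

pred = np.full(n_classes, 1.0 / n_classes)   # a uniform model prediction
print(hard_label, cross_entropy(pred, target))
```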
20

Human Activity Recognition Based on Transfer Learning

Pang, Jinyong 06 July 2018
Human activity recognition (HAR) based on time series data is the problem of classifying various patterns. Its wide application in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products customized to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results. However, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a convolutional neural network based on transfer learning, which can eliminate those barriers.
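A minimal sketch of the kind of CNN transfer learning the abstract describes: pretrain a convolutional feature extractor on a source HAR dataset, freeze it, and retrain only a small classifier head on the target task. Layer sizes, shapes, and class counts are assumptions, not the thesis's architecture.

```python
# Sketch of CNN-based transfer learning for HAR: reuse a pretrained
# convolutional feature extractor and retrain only the classifier head
# (layer sizes, shapes, and class counts are assumptions).
from tensorflow.keras import layers, models

def base_cnn(n_channels=3, window=128):
    return models.Sequential([
        layers.Input(shape=(window, n_channels)),
        layers.Conv1D(32, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.GlobalAveragePooling1D(),
    ])

source_features = base_cnn()
# ...pretrain source_features plus a head on a large source dataset...

source_features.trainable = False      # freeze the learned feature extractor
target_model = models.Sequential([
    source_features,
    layers.Dense(6, activation="softmax"),   # new target activity classes
])
target_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Fine-tuning now updates only the small head, cutting training time and
# the computational resources the abstract identifies as the barrier.
```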
