31

Eye Movement Analysis for Activity Recognition in Everyday Situations

Gustafsson, Anton January 2018 (has links)
The increasing number of smart devices in our everyday environment has created new problems within human-computer interaction, such as how we humans are supposed to interact with these devices efficiently and with ease. Context-aware systems are a possible candidate for solving this problem: if a system could automatically detect people's activities and intentions, it could act accordingly without any explicit input from the user. Eyes have previously been shown to be a rich source of information about a person's cognitive state and current activity, which makes them a viable input modality for extracting activity information. In this thesis, we examine the possibility of detecting human activity by using a low-cost, home-built monocular eye tracker. An experiment was conducted where participants performed everyday activities in a kitchen to collect eye movement data. After the experiment, the data was annotated, preprocessed, and classified using multilayer perceptron and random forest classifiers. Even though the data set collected was small, the results showed a recognition rate of 30-40% depending on the classifier used. This confirms previous work that activity recognition using eye movement data is possible, but also shows that achieving high accuracy remains challenging.
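As a rough illustration of the classification step described in this abstract, here is a minimal Python sketch comparing a multilayer perceptron and a random forest with scikit-learn. The gaze features, segment counts, and activity labels are placeholder assumptions, not the thesis's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per annotated gaze segment
# (e.g. fixation rate, mean fixation duration, saccade amplitude statistics).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))       # 120 segments, 12 gaze features (placeholders)
y = rng.integers(0, 4, size=120)     # 4 kitchen activities (placeholder labels)

for clf in (MLPClassifier(max_iter=1000), RandomForestClassifier(n_estimators=100)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```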
32

Nonparametric Discovery of Human Behavior Patterns from Multimodal Data

Sun, Feng-Tso 01 May 2014 (has links)
Recent advances in sensor technologies and the growing interest in context-aware applications, such as targeted advertising and location-based services, have led to a demand for understanding human behavior patterns from sensor data. People engage in routine behaviors. Automatic routine discovery goes beyond low-level activity recognition such as sitting or standing and analyzes human behaviors at a higher level (e.g., commuting to work). The goal of the research presented in this thesis is to automatically discover high-level semantic human routines from low-level sensor streams. One recent line of research is to mine human routines from sensor data using parametric topic models. The main shortcoming of parametric models is that they assume a fixed, pre-specified parameter regardless of the data. Choosing an appropriate parameter usually requires an inefficient trial-and-error model selection process. Furthermore, it is even more difficult to find optimal parameter values in advance for personalized applications. The research presented in this thesis offers a novel nonparametric framework for human routine discovery that can infer high-level routines without knowing the number of latent low-level activities beforehand. More specifically, the framework automatically finds the size of the low-level feature vocabulary from sensor feature vectors at the vocabulary extraction phase. At the routine discovery phase, the framework further automatically selects the appropriate number of latent low-level activities and discovers latent routines. Moreover, we propose a new generative graphical model to incorporate multimodal sensor streams for the human activity discovery task. The hypothesis and approaches presented in this thesis are evaluated on public datasets in two routine domains: two daily-activity datasets and a transportation mode dataset. Experimental results show that our nonparametric framework can automatically learn the appropriate model parameters from multimodal sensor data without any form of manual model selection procedure and can outperform traditional parametric approaches for human routine discovery tasks.
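The thesis proposes its own generative graphical model; as a hedged stand-in for the general idea — letting the data determine the number of latent activities instead of fixing it in advance — the sketch below uses a hierarchical Dirichlet process topic model from gensim over invented sensor "words".

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Each "document" is one day of discretized sensor readings mapped to vocabulary
# words (invented examples; the thesis extracts its vocabulary automatically).
docs = [["sit", "walk", "walk", "bus"],
        ["sit", "type", "type", "coffee"],
        ["walk", "bus", "bus", "sit"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

hdp = HdpModel(corpus, id2word=dictionary)   # note: no topic count is supplied
print(hdp.print_topics(num_topics=5, num_words=3))
```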
33

Activity retrieval in closed captioned videos

Gupta, Sonal 2009 August 1900 (has links)
Recognizing activities in real-world videos is a difficult problem exacerbated by background clutter, changes in camera angle & zoom, occlusion and rapid camera movements. Large corpora of labeled videos can be used to train automated activity recognition systems, but this requires expensive human labor and time. This thesis explores how closed captions that naturally accompany many videos can act as weak supervision that allows automatically collecting 'labeled' data for activity recognition. We show that such an approach can improve activity retrieval in soccer videos. Our system requires no manual labeling of video clips and needs minimal human supervision. We also present a novel caption classifier that uses additional linguistic information to determine whether a specific comment refers to an ongoing activity. We demonstrate that combining linguistic analysis and automatically trained activity recognizers can significantly improve the precision of video retrieval.
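A minimal sketch of a caption classifier in this spirit, assuming a simple bag-of-words model in scikit-learn; the thesis's classifier uses richer linguistic analysis, and the captions and labels below are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

captions = ["He kicks the ball into the net",        # describes an ongoing activity
            "The crowd waves flags in the stands"]   # background commentary
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(captions, labels)
print(clf.predict(["She passes the ball forward"]))
```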
34

Handwritten signature verification using locally optimized distance-based classification.

Moolla, Yaseen. 28 November 2013 (has links)
Although handwritten signature verification has been extensively researched, it has not achieved an optimal accuracy rate. Efficient and accurate signature verification techniques are therefore required, since signatures are still widely used as a means of personal verification. This research work presents efficient distance-based classification techniques as an alternative to supervised learning classification techniques (SLTs). Two different feature extraction techniques were used, namely the Enhanced Modified Direction Feature (EMDF) and the Local Directional Pattern feature (LDP). These were used to analyze the effect of using several different distance-based classification techniques. Among the classification techniques used are the cosine similarity measure and the Mahalanobis, Canberra, Manhattan, Euclidean, weighted Euclidean, and fractional distances. Additionally, the novel weighted fractional distances, as well as locally optimized resampling of feature vector sizes, were tested. The best accuracy was achieved by applying a combination of the weighted fractional distances and locally optimized resampling classification techniques to the Local Directional Pattern feature extraction. This combination of multiple distance-based classification techniques achieved an accuracy rate of 89.2% when using the EMDF feature extraction technique, and 90.8% when using the LDP feature extraction technique. These results are comparable to those in the literature, where the same feature extraction techniques were classified with SLTs. The best of the distance-based classification techniques were found to produce greater accuracy than the SLTs. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2012.
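For reference, the fractional distance generalizes the Minkowski distance to exponents below 1, and weighting it per dimension gives the weighted fractional variant mentioned above. A small sketch, with placeholder exponent and weights (the thesis's exact formulation may differ):

```python
import numpy as np

def fractional_distance(x, y, f=0.5, w=None):
    """Minkowski-style distance with fractional exponent f < 1.
    Optional per-dimension weights w give the weighted fractional variant."""
    d = np.abs(x - y) ** f
    if w is not None:
        d = w * d                    # emphasize more discriminative feature dimensions
    return float(d.sum() ** (1.0 / f))

a, b = np.array([0.2, 0.9, 0.4]), np.array([0.1, 0.7, 0.8])
print(fractional_distance(a, b))                            # unweighted
print(fractional_distance(a, b, w=np.array([1., 2., 1.])))  # weighted
```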
35

Planetary navigation activity recognition using wearable accelerometer data

Song, Wen January 1900 (has links)
Master of Science / Department of Electrical & Computer Engineering / Steve Warren / Activity recognition can be an important part of human health awareness. Many benefits can be generated from the recognition results, including knowledge of activity intensity as it relates to wellness over time. Various activity-recognition techniques have been presented in the literature, though most address simple activity-data collection and off-line analysis. More sophisticated real-time identification is less often addressed. It is therefore promising to combine current off-line activity-detection methods with wearable, embedded tools in order to create a real-time wireless human activity recognition system with improved accuracy. Unlike previous work on activity recognition, this effort focuses on specific activities that an astronaut may encounter during a mission. Planetary navigation field test (PNFT) tasks are designed to meet this need. The approach used by the KSU team is to pre-record data on the ground in normal earth gravity and seek signal features that can be used to identify, and even predict, fatigue associated with these activities. The eventual goal is to then assess/predict the condition of an astronaut in a reduced-gravity environment using these predetermined rules. Several classic machine learning algorithms, including the k-Nearest Neighbor, Naïve Bayes, C4.5 Decision Tree, and Support Vector Machine approaches, were applied to these data to identify recognition algorithms suitable for real-time application. Graphical user interfaces (GUIs) were designed for both MATLAB and LabVIEW environments to facilitate recording and data analysis. Training data for the machine learning algorithms were recorded while subjects performed each activity, and these identification approaches were then applied to new data sets with an identification accuracy of around 86%. Early results indicate that a single three-axis accelerometer is sufficient to identify the occurrence of a given PNFT activity. A custom, embedded acceleration monitoring system employing ZigBee transmission is under development for future real-time activity recognition studies. A different GUI has been implemented for this system, which uses an on-line algorithm that will seek to identify activity at a refresh rate of 1 Hz.
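A compact sketch of the classifier comparison described above, using scikit-learn on placeholder per-window accelerometer features; the feature set and labels are assumptions, and scikit-learn's entropy-criterion tree is a CART-based stand-in for C4.5 rather than the real algorithm.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))    # e.g. mean and std per accelerometer axis (placeholders)
y = rng.integers(0, 4, 200)      # placeholder PNFT activity labels

for clf in (KNeighborsClassifier(), GaussianNB(),
            DecisionTreeClassifier(criterion="entropy"),  # C4.5-like stand-in
            SVC()):
    print(type(clf).__name__, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```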
36

Human Activity Recognition: Deep learning techniques for an upper body exercise classification system

Nardi, Paolo January 2019 (has links)
Most research on Machine Learning models in the field of Human Activity Recognition focuses mainly on the classification of daily human activities and aerobic exercises. In this study, we focus on the use of 1 accelerometer and 2 gyroscope sensors to build a Deep Learning classifier that recognises 5 different strength exercises, as well as a null class. The strength exercises tested in this research are as follows: bench press, bent row, deadlift, lateral raises, and overhead press. The null class contains recordings of daily activities, such as sitting or walking around the house. The model used in this paper consists of the creation of consecutive overlapping fixed-length sliding windows for each exercise, which are processed separately and act as the input for a Deep Convolutional Neural Network. In this study we compare different sliding window lengths and overlap percentages (step sizes) to obtain the optimal window length and overlap percentage combination. Furthermore, we explore the accuracy results of 1D and 2D Convolutional Neural Networks. Cross-validation is also used to check the overall accuracy of the classifiers; the database used in this paper contains 5 exercises performed by 3 different users and a null class. Overall, the models were found to perform accurately for windows with a length of 0.5 seconds or greater, and provide a solid foundation for the creation of a more robust, fully integrated model that can recognize a wider variety of exercises.
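A sketch of the windowing-plus-1D-CNN pipeline under stated assumptions: 9 input channels (3 accelerometer axes plus two 3-axis gyroscopes), a 50 Hz sampling rate so a 0.5-second window is 25 samples, and 6 output classes (5 exercises plus the null class). The network shape is illustrative, not the thesis's architecture.

```python
import numpy as np
import torch
import torch.nn as nn

def sliding_windows(signal, win_len, overlap):
    """signal: (n_samples, n_channels) -> (n_windows, n_channels, win_len)."""
    step = max(1, int(win_len * (1.0 - overlap)))
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[i:i + win_len].T for i in starts])

x = sliding_windows(np.random.randn(1000, 9), win_len=25, overlap=0.5)

cnn = nn.Sequential(
    nn.Conv1d(9, 32, kernel_size=5), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 6),
)
logits = cnn(torch.tensor(x, dtype=torch.float32))
print(logits.shape)   # (n_windows, 6)
```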
37

Elderly activity recognition using smartphones and wearable devices

Zimmermann, Larissa Cardoso 13 February 2019 (has links)
Research involving human beings depends on data collection. As technology solutions become popular in the context of healthcare, researchers highlight the need for monitoring and caring for patients in situ. Human Activity Recognition (HAR) is a research field that combines two areas: Ubiquitous Computing and Artificial Intelligence. HAR is applied daily in several service sectors, including the military, security (surveillance), health, and entertainment. A HAR system aims to identify and recognize the activities and actions a user performs, in real time or not. Ambient sensors (e.g. cameras) and wearable devices (e.g. smartwatches) collect information about users and their context (e.g. location, time, companions). This data is processed by machine learning algorithms that extract information and classify the corresponding activity. Although there are several works in the literature related to HAR systems, most studies focusing on elderly users are limited and do not use, as ground truth, data collected from elderly volunteers. Databases and sensors reported in the literature are geared towards a generic audience, which leads to a loss in accuracy and robustness when targeting a specific audience. Considering this gap, this work presents a Human Activity Recognition system and corresponding database focusing on the elderly, raising requirements and guidelines for a supportive HAR system and the selection of sensor devices. The system evaluation was carried out by checking the accuracy of the activity recognition process and defining the best statistical features and classification algorithms for the Elderly Activity Recognition System (EARS). The results suggest that EARS is a promising supportive technology for the elderly, achieving an accuracy of 98.37% with KNN (k = 1).
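To make the pipeline concrete, a minimal sketch of statistical feature extraction followed by the reported KNN (k = 1) classifier; the particular features and the placeholder data are assumptions, since the abstract does not list the exact feature set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(w):
    """w: (n_samples, 3) accelerometer window -> statistical feature vector."""
    return np.concatenate([w.mean(axis=0),                            # per-axis mean
                           w.std(axis=0),                             # per-axis std
                           np.abs(np.diff(w, axis=0)).mean(axis=0)])  # mean abs delta

rng = np.random.default_rng(2)
windows = rng.normal(size=(300, 128, 3))    # placeholder 128-sample windows
X = np.array([window_features(w) for w in windows])
y = rng.integers(0, 5, 300)                 # placeholder activity labels

knn = KNeighborsClassifier(n_neighbors=1)   # k = 1, as reported above
knn.fit(X, y)
print(knn.predict(X[:3]))
```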
38

Human Motion Anticipation and Recognition from RGB-D

Barsoum, Emad January 2019 (has links)
Predicting and understanding the dynamics of human motion has many applications, such as motion synthesis, augmented reality, security, education, reinforcement learning, autonomous vehicles, and many others. In this thesis, we create a novel end-to-end pipeline that can predict multiple future poses from the same input and, in addition, can classify the entire sequence. Our focus is on the following two aspects of human motion understanding. Probabilistic human action prediction: given a sequence of human poses as input, we sample multiple possible future poses from the same input sequence using a new GAN-based network. Human motion understanding: given a sequence of human poses as input, we classify the actual action performed in the sequence and improve the classification performance using the representation learned from the prediction network. We also demonstrate how to improve model training from noisy labels, using facial expression recognition as an example. More specifically, we have 10 taggers label each input image, and compare four different approaches: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We show that the traditional majority voting scheme does not perform as well as the last two approaches, which fully leverage the label distribution. We shared the enhanced FER+ data set, with multiple labels for each face image, with the research community (https://github.com/Microsoft/FERPlus). For predicting and understanding human motion, we propose a novel sequence-to-sequence model trained with an improved version of generative adversarial networks (GAN). Our model, which we call HP-GAN2, learns a probability density function of future human poses conditioned on previous poses. It predicts multiple sequences of possible future human poses, each from the same input sequence but seeded with a different vector z drawn from a random distribution. Moreover, to quantify the quality of the non-deterministic predictions, we simultaneously train a motion-quality-assessment model that learns the probability that a given skeleton pose sequence is a real or fake human motion. To classify the action performed in a video clip, we took two approaches. In the first approach, we train on a sequence of skeleton poses from scratch, using random parameter initialization, with the same network architecture used in the discriminator of the HP-GAN2 model. For the second approach, we use the discriminator of the HP-GAN2 network, extend it with an action classification branch, and fine-tune the end-to-end model on the classification tasks, since the discriminator in HP-GAN2 learned to differentiate between fake and real human motion. Our hypothesis is that if the discriminator network can differentiate between synthetic and real skeleton poses, then it has also learned some of the dynamics of real human motion, and that those dynamics are useful in classification as well. We show through multiple experiments that this is indeed the case. In summary, our model learns to predict multiple future sequences of human poses from the same input sequence. We also show that the discriminator learns a general representation of human motion by using the learned features in an action recognition task, and we train a motion-quality-assessment network that measures the probability that a given sequence of poses is a valid human motion. We test our model on two of the largest human pose datasets: NTU RGB+D and Human3.6M. We train on both single and multiple action types. The predictive power of our model for motion estimation is demonstrated by generating multiple plausible futures from the same input and by showing the effect of each of the several loss functions in an ablation study. We also show the advantage of switching to GAN from WGAN-GP, which we used in our previous work. Furthermore, we show that it takes less than half the number of epochs to train an activity recognition network when using the features learned from the discriminator.
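The noisy-label comparison can be made concrete with a small sketch: building a majority-vote target versus a full label-distribution target from tagger votes, plus a soft-target cross-entropy. The vote vector below is invented; FER+ uses 8 emotion classes.

```python
import numpy as np

def targets_from_votes(votes, n_classes=8):
    """votes: one label per tagger for a single image (FER+ used 10 taggers)."""
    counts = np.bincount(votes, minlength=n_classes)
    majority = np.eye(n_classes)[counts.argmax()]   # majority voting: hard one-hot
    distribution = counts / counts.sum()            # keeps the taggers' uncertainty
    return majority, distribution

def soft_cross_entropy(logits, target):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -(target * np.log(p + 1e-12)).sum()

votes = np.array([0, 0, 0, 4, 4, 4, 4, 6, 6, 2])    # 10 taggers, disagreeing
majority, distribution = targets_from_votes(votes)
logits = np.zeros(8)
print(soft_cross_entropy(logits, majority), soft_cross_entropy(logits, distribution))
```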
39

Automatic recognition of healthcare worker hand hygiene

Galluzzi, Valerie 01 August 2015 (has links)
Hand hygiene is an important part of preventing disease transmission in the hospital. Due to this importance, electronic systems have been proposed for automatically monitoring healthcare worker adherence to hand hygiene guidelines. However, these systems can miss certain hand hygiene events and do not include quality metrics such as duration or technique. We propose that hand hygiene duration and technique can be automatically inferred using the motion of the wrist. This work presents a system utilizing wrist-based 3-dimensional accelerometers and orientation sensors, signal processing (including novel features), and machine learning to detect healthcare worker hand hygiene and report quality metrics such as duration and whether the healthcare worker used recommended rubbing technique. We validated the system using several different types of data sets with up to 116 healthcare workers and activities ranging from synthetically generated hand hygiene movements to observation of healthcare worker hand hygiene on the hospital floor. In these experiments our system detects up to 98.4% of hand hygiene events, detects hand hygiene technique with up to 92.1% accuracy, and accurately estimates hand hygiene duration.
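As an illustration of turning per-window detections into a duration metric, a small sketch that merges consecutive positive windows into hand-hygiene events; the window size and minimum-duration threshold are placeholder values, not the system's actual parameters.

```python
import numpy as np

def hygiene_events(window_preds, window_sec=1.0, min_dur=2.0):
    """Merge consecutive positive per-window predictions into (start_s, duration_s)
    events, dropping events shorter than min_dur seconds."""
    events, start = [], None
    for i, p in enumerate(np.append(window_preds, 0)):   # sentinel ends a trailing run
        if p and start is None:
            start = i
        elif not p and start is not None:
            dur = (i - start) * window_sec
            if dur >= min_dur:
                events.append((start * window_sec, dur))
            start = None
    return events

print(hygiene_events(np.array([0, 1, 1, 1, 0, 1, 0, 1, 1])))  # [(1.0, 3.0), (7.0, 2.0)]
```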
40

Human Activity Recognition Based on Transfer Learning

Pang, Jinyong 06 July 2018 (has links)
Human activity recognition (HAR) based on time series data is the problem of classifying various patterns. Its wide range of applications in health care offers huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products adapted to their personal characteristics. Deep learning models can handle HAR tasks with satisfactory results. However, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a Convolutional Neural Network based on transfer learning, which can eliminate those barriers.
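A hedged sketch of the transfer-learning idea in PyTorch: freeze a (hypothetically pretrained) convolutional feature extractor and train only a new classification head, which is where the training-time savings come from; the layer shapes and class count are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractor; in practice its weights would come from
# pretraining on a large source-domain HAR dataset.
base = nn.Sequential(
    nn.Conv1d(3, 64, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
for p in base.parameters():
    p.requires_grad = False            # reuse source-domain features as-is

head = nn.Linear(64, 6)                # only the new task head is trained
model = nn.Sequential(base, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, 3, 128)             # batch of 8 windows, 3 axes, 128 samples
print(model(x).shape)                  # torch.Size([8, 6])
```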
