  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Handwritten signature verification using locally optimized distance-based classification.

Moolla, Yaseen. 28 November 2013 (has links)
Although handwritten signature verification has been extensively researched, it has not yet achieved optimal accuracy rates. Efficient and accurate signature verification techniques are therefore still required, since signatures remain widely used as a means of personal verification. This research presents efficient distance-based classification techniques as an alternative to supervised learning techniques (SLTs). Two different feature extraction techniques were used, namely the Enhanced Modified Direction Feature (EMDF) and the Local Directional Pattern (LDP) feature. These were used to analyze the effect of several different distance-based classification techniques, among them the cosine similarity measure and the Mahalanobis, Canberra, Manhattan, Euclidean, weighted Euclidean and fractional distances. Additionally, novel weighted fractional distances, as well as locally optimized resampling of feature vector sizes, were tested. The best accuracy was achieved by applying a combination of the weighted fractional distances and locally optimized resampling classification techniques to the Local Directional Pattern feature extraction. This combination of multiple distance-based classification techniques achieved an accuracy rate of 89.2% with the EMDF feature extraction technique and 90.8% with the LDP feature extraction technique. These results are comparable to those in the literature where the same feature extraction techniques were classified with SLTs; the best of the distance-based classification techniques were found to produce greater accuracy than the SLTs. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2012.
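The fractional distance described above can be sketched as a Minkowski-style distance with an exponent below 1; the exponent p = 0.5, the optional weight vector, and the nearest-reference decision rule below are illustrative assumptions, not the thesis' exact configuration.

```python
import numpy as np

def fractional_distance(x, y, p=0.5, weights=None):
    # Minkowski-style distance with a fractional exponent p < 1;
    # weights is an optional per-feature emphasis vector (assumed here).
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * np.abs(x - y) ** p) ** (1.0 / p))

def classify(query, references, labels, p=0.5, weights=None):
    # Nearest-reference rule: assign the label of the closest enrolled
    # reference feature vector (an assumed decision rule for illustration).
    dists = [fractional_distance(query, r, p, weights) for r in references]
    return labels[int(np.argmin(dists))]
```

Setting weights per feature turns this into the weighted fractional distance; choosing them per signer is one way the local optimization described above could be realized.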
22

Trajectory Analytics

Santiteerakul, Wasana 05 1900 (has links)
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both the spatial and temporal information of the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations that can be used to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to obtain an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we use an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify low-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes in a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
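A minimal sketch of the k-medians clustering step, assuming the relationship sequences have already been encoded as numeric vectors (the token-count histogram over a hypothetical three-letter relation alphabet is an assumption for illustration):

```python
import numpy as np

def k_medians(X, k, iters=50, seed=0):
    # Like k-means, but each center is the per-dimension median of its
    # cluster and assignment uses Manhattan distance, which matches the
    # median-based update and is more robust to outliers.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        assign = d.argmin(axis=1)
        new = np.array([np.median(X[assign == j], axis=0)
                        if np.any(assign == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return assign, centers

# Hypothetical encoding: count each relation token in a pairwise
# trajectory-segment relationship sequence (alphabet is assumed).
ALPHABET = "ABC"
def histogram(seq):
    return np.array([seq.count(t) for t in ALPHABET], dtype=float)
```

Each relationship string would be mapped through `histogram` and the resulting vectors clustered with `k_medians` to surface recurring interaction patterns.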
23

Human Activity Recognition : Deep learning techniques for an upper body exercise classification system

Nardi, Paolo January 2019 (has links)
Most research behind the use of machine learning models in the field of Human Activity Recognition focuses mainly on the classification of daily human activities and aerobic exercises. In this study, we focus on the use of 1 accelerometer and 2 gyroscope sensors to build a deep learning classifier that recognises 5 different strength exercises, as well as a null class. The strength exercises tested in this research are as follows: bench press, bent row, deadlift, lateral raises and overhead press. The null class contains recordings of daily activities, such as sitting or walking around the house. The model used in this paper consists of the creation of consecutive overlapping fixed-length sliding windows for each exercise, which are processed separately and act as the input for a deep convolutional neural network. In this study we compare different sliding window lengths and overlap percentages (step sizes) to obtain the optimal window length and overlap percentage combination. Furthermore, we compare the accuracy of 1D and 2D convolutional neural networks. Cross-validation is also used to check the overall accuracy of the classifiers; the database used in this paper contains the 5 exercises performed by 3 different users, plus a null class. Overall, the models were found to perform accurately for windows with a length of 0.5 seconds or greater, and provide a solid foundation for the creation of a more robust, fully integrated model that can recognize a wider variety of exercises.
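The overlapping sliding-window preprocessing described above can be sketched as follows; the 50 Hz sampling rate (so a 0.5 s window is 25 samples) and the step size are assumed numbers for illustration:

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    # Split a sensor recording into fixed-length overlapping windows;
    # step == win_len means no overlap, step == win_len // 2 means ~50%.
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# Assumed 50 Hz sampling: a 0.5 s window is 25 samples, and a roughly
# 50% overlap corresponds to a step of 12 samples.
windows = sliding_windows(np.arange(100), win_len=25, step=12)
```

Each window would then be fed to the convolutional network as one training example, and the window length / step pair swept to find the best combination.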
24

Elderly activity recognition using smartphones and wearable devices / Reconhecimento de atividades de pessoas idosas com smartphone e dispositivos vestíveis

Zimmermann, Larissa Cardoso 13 February 2019 (has links)
Research that involves human beings depends on data collection. As technology solutions become popular in the context of healthcare, researchers highlight the need to monitor and care for patients in situ. Human Activity Recognition (HAR) is a research field that combines two areas: ubiquitous computing and artificial intelligence. HAR is applied daily in several service sectors, including the military, security (surveillance), health and entertainment. A HAR system aims to identify and recognize the activities and actions a user performs, in real time or not. Ambient sensors (e.g. cameras) and wearable devices (e.g. smartwatches) collect information about users and their context (e.g. location, time, companions). This data is processed by machine learning algorithms that extract information and classify the corresponding activity. Although there are several works in the literature related to HAR systems, studies focusing on elderly users are limited and most do not use, as ground truth, data collected from elderly volunteers. Databases and sensors reported in the literature are geared towards a generic audience, which leads to a loss of accuracy and robustness when targeting a specific audience. Considering this gap, this work presents a Human Activity Recognition system and a corresponding database focused on the elderly, raising requirements and guidelines for a supportive HAR system and the selection of sensor devices. The system evaluation was carried out by checking the accuracy of the activity recognition process and determining the best statistical features and classification algorithms for the Elderly Activity Recognition System (EARS). The results suggest that EARS is a promising supportive technology for the elderly, achieving an accuracy of 98.37% with KNN (k = 1).
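The classification stage described above can be sketched as per-window statistical features followed by a KNN classifier with k = 1; the particular statistics (mean, standard deviation, min, max per axis) are an assumption, not necessarily the thesis' exact feature set.

```python
import numpy as np

def statistical_features(window):
    # Hypothetical per-window feature vector: mean, std, min and max of
    # each sensor axis (the thesis' exact feature set may differ).
    w = np.asarray(window, dtype=float)
    return np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)])

def knn_predict(query, train_X, train_y, k=1):
    # Plain KNN with Euclidean distance; k = 1 matches the reported
    # best-performing configuration.
    d = np.linalg.norm(np.asarray(train_X, dtype=float) - query, axis=1)
    nearest = [train_y[i] for i in np.argsort(d)[:k]]
    return max(set(nearest), key=nearest.count)
```

With k = 1 the classifier simply copies the label of the single closest training window, which is cheap enough to run on a smartphone.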
25

Characterising the relationship between practice and laboratory-based studies of designers for critical design situations

Cash, Philip January 2012 (has links)
Experimental study of the designer plays a critical role in design research. However, laboratory-based study is often poorly compared and contrasted with practice, leading to a lack of uptake and subsequent research impact. The importance of addressing this issue is highlighted by its significant influence on design research and many related fields. As such, the main aim of this work is to improve empirical design research by characterising the relationship between practice and laboratory-based studies for critical design situations. A review of state-of-the-art methods in design research and key related fields is reported. This highlights the importance and commonality of a set of core issues connected to the failure to effectively link the study of practice and study in the laboratory. Further to this, a technical review and scoping exercise was carried out to establish the most effective capture strategy for studying the designer empirically. Subsequently, three studies are reported, forming a three-point comparison between practice, the laboratory (with student practitioners) and an intermediary case (a laboratory study with practitioners). Results from these studies contextualise the critical situations in practice and develop a detailed multi-level comparison between practice and the laboratory, which was then validated with respect to a number of existing studies. The primary contribution of this thesis is the development of a detailed multi-level relationship between practice and the laboratory for critical design situations: information seeking, ideation and design review. The second key contribution is the development of a generic method for the empirical study of designers in varying contexts, allowing researchers to build on this work and more effectively link diverse studies together.
The final key contribution of this work is the identification of a number of core methodological issues and mitigating techniques affecting both design research and its related fields.
26

Human Motion Anticipation and Recognition from RGB-D

Barsoum, Emad January 2019 (has links)
Predicting and understanding the dynamics of human motion has many applications, such as motion synthesis, augmented reality, security, education, reinforcement learning and autonomous vehicles. In this thesis, we create a novel end-to-end pipeline that can predict multiple future poses from the same input and, in addition, can classify the entire sequence. Our focus is on the following two aspects of human motion understanding. Probabilistic human action prediction: given a sequence of human poses as input, we sample multiple possible future poses from the same input sequence using a new GAN-based network. Human motion understanding: given a sequence of human poses as input, we classify the actual action performed in the sequence and improve the classification performance using the representation learned by the prediction network. We also demonstrate how to improve model training from noisy labels, using facial expression recognition as an example. More specifically, we have 10 taggers label each input image and compare four different approaches: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We show that the traditional majority voting scheme does not perform as well as the last two approaches, which fully leverage the label distribution. We shared the enhanced FER+ data set, with multiple labels for each face image, with the research community (https://github.com/Microsoft/FERPlus). For predicting and understanding human motion, we propose a novel sequence-to-sequence model trained with an improved version of generative adversarial networks (GANs). Our model, which we call HP-GAN2, learns a probability density function of future human poses conditioned on previous poses. It predicts multiple sequences of possible future human poses, each from the same input sequence but seeded with a different vector z drawn from a random distribution.
Moreover, to quantify the quality of the non-deterministic predictions, we simultaneously train a motion-quality-assessment model that learns the probability that a given skeleton pose sequence is real or fake human motion. To classify the action performed in a video clip, we took two approaches. In the first approach, we train on a sequence of skeleton poses from scratch, using random parameter initialization, with the same network architecture used in the discriminator of the HP-GAN2 model. In the second approach, we take the discriminator of the HP-GAN2 network, extend it with an action classification branch, and fine-tune the end-to-end model on the classification task, since the discriminator in HP-GAN2 has learned to differentiate between fake and real human motion. Our hypothesis is that if the discriminator network can differentiate between synthetic and real skeleton poses, then it has also learned some of the dynamics of real human motion, and that those dynamics are useful for classification as well. We show through multiple experiments that this is indeed the case. Therefore, our model learns to predict multiple future sequences of human poses from the same input sequence. We also show that the discriminator learns a general representation of human motion by using the learned features in an action recognition task, and we train a motion-quality-assessment network that measures the probability that a given sequence of poses is valid human motion. We test our model on two of the largest human pose datasets: NTU RGB+D and Human3.6M. We train on both single and multiple action types. The predictive power of our model for motion estimation is demonstrated by generating multiple plausible futures from the same input and by showing the effect of each of the several loss functions in an ablation study. We also show the advantage of switching from WGAN-GP, which we used in our previous work, to GAN.
Furthermore, we show that it takes less than half the number of epochs to train an activity recognition network by using the features learned from the discriminator.
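The core sampling idea above, one input sequence plus a freshly drawn z per sample yielding multiple plausible futures, can be illustrated with a toy stand-in for the generator; the untrained random weights, the mean-pooling "encoder", and the single-frame output are all placeholder assumptions, not HP-GAN2's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(past_poses, z, W_ctx, W_z):
    # Toy stand-in for a conditional pose generator: it mixes an encoding
    # of the past sequence with a random vector z, so redrawing z yields
    # a different plausible future (weights are untrained placeholders).
    context = past_poses.mean(axis=0)           # crude sequence encoding
    return np.tanh(context @ W_ctx + z @ W_z)   # one predicted pose frame

past = rng.normal(size=(10, 8))                 # 10 frames, 8 pose dims
W_ctx, W_z = rng.normal(size=(8, 8)), rng.normal(size=(4, 8))
# Multiple futures from the SAME input: a fresh z for each sample.
futures = [generator(past, rng.normal(size=4), W_ctx, W_z) for _ in range(5)]
```

In the real system the generator is a trained sequence-to-sequence network and each sample is a whole pose sequence, but the conditioning-on-z mechanism is the same.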
27

Human Activity Recognition and Prediction using RGBD Data

Coen, Paul Dixon 01 August 2019 (has links)
Being able to predict and recognize human activities is an essential element of effective communication between humans during day-to-day activities. A system able to do this has a number of appealing applications, from assistive robotics to health care and preventative medicine. Previous work in supervised video-based human activity prediction and detection fails to capture the richness of the spatiotemporal data that these activities generate. Convolutional long short-term memory (Convolutional LSTM) networks are a useful tool for analyzing this type of data, showing good results in many other areas. This thesis focuses on utilizing RGB-D data to improve human activity prediction and recognition, and a modified Convolutional LSTM network is introduced to do so. Experiments are performed on the network and compared to other models in use, as well as the current state-of-the-art system. We show that our proposed model for human activity prediction and recognition outperforms current state-of-the-art models on the CAD-120 dataset without being given bounding frames or ground truths about objects.
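The defining trait of a Convolutional LSTM is that the usual LSTM gate transforms are convolutions rather than dense matrix multiplications, so the spatial layout of each frame survives through time. A single-channel, loop-based sketch of one cell step (kernel shapes and initialisation are assumptions; this illustrates the gate structure, not the thesis' network):

```python
import numpy as np

def conv2d(x, k):
    # 'Same'-padded single-channel 2-D convolution via explicit loops;
    # enough to show the gate computation, not an efficient kernel.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convlstm_step(x, h, c, K):
    # One ConvLSTM cell step: standard LSTM gates, but every input/state
    # transform is a convolution. K maps gate names to kernels.
    i = sigmoid(conv2d(x, K["xi"]) + conv2d(h, K["hi"]))   # input gate
    f = sigmoid(conv2d(x, K["xf"]) + conv2d(h, K["hf"]))   # forget gate
    o = sigmoid(conv2d(x, K["xo"]) + conv2d(h, K["ho"]))   # output gate
    g = np.tanh(conv2d(x, K["xg"]) + conv2d(h, K["hg"]))   # candidate
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(0)
K = {g: rng.normal(scale=0.5, size=(3, 3))
     for g in ("xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg")}
x = rng.normal(size=(5, 5))                 # one frame (5x5 "image")
h, c = np.zeros((5, 5)), np.zeros((5, 5))   # initial hidden/cell states
for _ in range(3):                          # run a few time steps
    h, c = convlstm_step(x, h, c, K)
```

Because `h` and `c` keep the frame's 2-D shape, the recurrence tracks where motion happens, not just that it happens.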
28

Human Activity Recognition Based on Transfer Learning

Pang, Jinyong 06 July 2018 (has links)
Human activity recognition (HAR) based on time series data is the problem of classifying various patterns. Its wide applicability in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products adapted to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results. However, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a convolutional neural network based on transfer learning, which can remove those barriers.
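The economics of transfer learning described above, reuse a base trained on a source dataset and retrain only a small head, can be sketched as follows; the frozen random "pretrained" weights, the ReLU feature map, and the logistic head are all stand-in assumptions rather than the study's actual network.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(X, W_frozen):
    # Stand-in for a convolutional base trained on a source HAR dataset;
    # its weights stay frozen during transfer.
    return np.maximum(X @ W_frozen, 0.0)        # ReLU features

def train_head(F, y, lr=0.1, epochs=500):
    # Only the small classification head is retrained on the target
    # data, which is the cheap part of transfer learning.
    F = np.hstack([F, np.ones((len(F), 1))])    # bias column
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))      # logistic output
        w -= lr * F.T @ (p - y) / len(y)        # gradient step
    return w

def predict(F, w):
    F = np.hstack([F, np.ones((len(F), 1))])
    return (F @ w > 0).astype(int)
```

Since only the head's few parameters are updated, adapting the system to a new user's data is far cheaper than retraining the whole network.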
29

Capturing human activity based on the city structure : A Space syntax case study in urban pedestrian movement

Luther, Gustav January 2020 (has links)
In this paper, the Swedish cities of Gävle and Göteborg are compared with regard to how well activity-based hotspots can capture pedestrian movement. The aim of this paper is to develop a further understanding of how the built environment affects human activities, as well as to apply new methods for analyzing big geospatial data. The project is carried out with user-generated travel data from the company Trivector and their app TravelVu. Space syntax theory and methods are applied to the street networks to investigate whether there are correlations between connectivity and the number of travels per street, which in turn is based on natural streets. The results indicate that there is a correlation between connectivity and the number of travels per street, but with the use of naturally generated hotspots based on human activity the correlation increases greatly, which implies that in areas with high human activity the connectivity of streets captures human movement better than in areas with low activity.
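The finding above, that the connectivity/travel-count correlation strengthens when restricted to activity-based hotspots, can be illustrated with a small correlation check; the numbers are toy values, not TravelVu data.

```python
import numpy as np

def pearson_r(connectivity, travels):
    # Pearson correlation between a street's space-syntax connectivity
    # and its observed travel count.
    return float(np.corrcoef(np.asarray(connectivity, float),
                             np.asarray(travels, float))[0, 1])

# Toy numbers for illustration: restricting the correlation to streets
# inside activity-based hotspots mimics the reported strengthening of
# the relationship in high-activity areas.
connectivity = np.array([1, 2, 3, 4, 5, 6], dtype=float)
travels      = np.array([3, 2, 7, 9, 14, 18], dtype=float)
in_hotspot   = np.array([False, False, True, True, True, True])
r_all = pearson_r(connectivity, travels)
r_hot = pearson_r(connectivity[in_hotspot], travels[in_hotspot])
```

Comparing `r_hot` with `r_all` per city is the kind of before/after check the hotspot analysis performs at scale.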
30

Human Activity Recognition and Pathological Gait Pattern Identification

Niu, Feng 14 December 2007 (has links)
Human activity analysis has attracted great interest from computer vision researchers due to its promising applications in many areas, such as automated visual surveillance, human-computer interaction, and motion-based identification and diagnosis. This dissertation presents work in two areas: general human activity recognition from video, and human activity analysis for the purpose of identifying pathological gait from both 3D motion-captured data and video. Even though research in human activity recognition has been going on for many years, many issues still need further study, including the effective representation and modeling of human activities and the segmentation of sequences of continuous activities. In this thesis we present an algorithm that combines shape and motion features to represent human activities. To handle activity recognition from any viewing angle, we quantize the viewing direction and build a set of Hidden Markov Models (HMMs), where each model represents the activity from a given view. Finally, a voting-based algorithm is used to segment and recognize a sequence of human activities from video. Our method of representing activities is suitable for both low-resolution and high-resolution video, and the voting-based algorithm performs segmentation and recognition simultaneously. Experiments on two sets of video clips of different activities show that our method is effective. Our work on identifying pathological gait is based on the assumption of gait symmetry. Previous work on gait analysis measures the symmetry of gait based on ground reaction force data, stance time, swing time or step length. Since the trajectories of the body parts contain information about the whole body's movement, we measure the symmetry of the gait based on the trajectories of the body parts. Two algorithms, which can work with different data sources, are presented.
The first algorithm works on 3D motion-captured data and the second works on video data. Both algorithms use a support vector machine (SVM) for classification. Each of the two methods has three steps: the first step is data preparation, i.e., obtaining the trajectories of the body parts; the second step is gait representation based on a measure of gait symmetry; and the last step is SVM-based classification. For 3D motion-captured data, a set of features based on the Discrete Fourier Transform (DFT) is used to represent the gait. We demonstrate the accuracy of the classification through a set of experiments showing that the method for 3D motion-captured data is highly effective. For video data, a model-based tracking algorithm for human body parts is developed to prepare the data. Then, a symmetry measure that works on the sequence of 2D data, i.e. the sequence of video frames, is derived to represent the gait. We performed experiments on both 2D projected data and real video data to examine this algorithm. The experimental results on 2D projected data showed that the presented algorithm is promising for identifying pathological gait from video. The experimental results on real video data are not as good as the results on 2D projected data; we believe that better results could be obtained if the accuracy of the tracking algorithm were improved.
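One way the DFT-based symmetry representation described above can be sketched: compare the low-frequency spectral magnitudes of left- and right-side trajectories, since a symmetric gait produces matching spectra. The specific feature (magnitude differences over the first few coefficients) is an assumption illustrating the idea, not the thesis' exact feature set.

```python
import numpy as np

def dft_symmetry_features(left_traj, right_traj, n_coeffs=8):
    # Low-frequency DFT magnitudes of the left- and right-side
    # trajectories; for a symmetric gait the spectra match (the sides
    # differ only by a half-cycle phase shift), so the differences are
    # near zero, while an asymmetric gait yields large differences.
    L = np.abs(np.fft.rfft(left_traj))[:n_coeffs]
    R = np.abs(np.fft.rfft(right_traj))[:n_coeffs]
    return np.abs(L - R)

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
# Symmetric gait: identical trajectories half a cycle out of phase.
sym_feats = dft_symmetry_features(np.sin(t), np.sin(t + np.pi))
```

Feature vectors like `sym_feats`, computed per body part, would then be fed to the SVM to separate normal from pathological gait.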
