81

Leveraging contextual cues for dynamic scene understanding

Bettadapura, Vinay Kumar 27 May 2016 (has links)
Environments with people are complex, with many activities and events that need to be represented and explained. The goal of scene understanding is to either determine what objects and people are doing in such complex and dynamic environments, or to know the overall happenings, such as the highlights of the scene. The context within which the activities and events unfold provides key insights that cannot be derived by studying the activities and events alone. In this thesis, we show that this rich contextual information can be successfully leveraged, along with the video data, to support dynamic scene understanding. We categorize and study four different types of contextual cues: (1) spatio-temporal context, (2) egocentric context, (3) geographic context, and (4) environmental context, and show that they improve dynamic scene understanding tasks across several different application domains. We start by presenting data-driven techniques to enrich spatio-temporal context by augmenting Bag-of-Words models with temporal, local and global causality information and show that this improves activity recognition, anomaly detection and scene assessment from videos. Next, we leverage the egocentric context derived from sensor data captured from first-person point-of-view devices to perform field-of-view localization in order to understand the user's focus of attention. We demonstrate single and multi-user field-of-view localization in both indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions. Next, we look at how geographic context can be leveraged to make challenging "in-the-wild" object recognition tasks more tractable using the problem of food recognition in restaurants as a case-study. Finally, we study the environmental context obtained from dynamic scenes such as sporting events, which take place in responsive environments such as stadiums and gymnasiums, and show that it can be successfully used to address the challenging task of automatically generating basketball highlights. We perform comprehensive user-studies on 25 full-length NCAA games and demonstrate the effectiveness of environmental context in producing highlights that are comparable to the highlights produced by ESPN.
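One simple way to picture the idea of enriching a Bag-of-Words representation with temporal ordering information is sketched below. This is a hypothetical, simplified illustration (the vocabulary, window size and pair-counting rule are assumptions), not the feature pipeline developed in the thesis: it concatenates a plain codeword histogram with counts of ordered codeword pairs that occur close together in time, one crude way to encode local temporal causality.

```python
import numpy as np

def temporal_bow(codewords, vocab_size, window=5):
    """Bag-of-Words histogram augmented with ordered co-occurrence
    counts of codewords within a short temporal window.

    codewords : sequence of integer codeword indices, one per frame/feature
    vocab_size: number of codewords in the vocabulary
    window    : how many subsequent frames count as "soon after"
    """
    hist = np.zeros(vocab_size)                      # standard BoW term
    pairs = np.zeros((vocab_size, vocab_size))       # ordered pair term

    for t, w in enumerate(codewords):
        hist[w] += 1
        for u in codewords[t + 1 : t + 1 + window]:  # codewords that follow w
            pairs[w, u] += 1

    # Concatenate and L1-normalise so clips of different lengths are comparable
    feat = np.concatenate([hist, pairs.ravel()])
    return feat / max(feat.sum(), 1.0)

# Example: a toy sequence over a 3-word vocabulary
print(temporal_bow([0, 2, 1, 2, 0], vocab_size=3).shape)  # (3 + 9,) = (12,)
```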
82

Human Activity Recognition Using Wearable Inertial Sensor Data and Machine Learning

Xiaoyu Yu (7043231) 16 August 2019 (has links)
Falling in an indoor home setting can be dangerous for the elderly population (in the USA and globally), causing hospitalization, long-term reduced mobility, disability or even death. Preventing falls by monitoring different human activities, or identifying the aftermath of a fall, has great significance for the elderly population. This is possible due to the availability and emergence of miniaturized sensors with advanced electronics and data analytics tools. This thesis aims at developing machine learning models to classify fall and non-fall activities. Two types of neural networks with different parameters were tested for their capability in dealing with such tasks, using a publicly available dataset. The two types of neural network models, convolutional and recurrent, were developed and evaluated. The convolutional neural network achieved an accuracy of over 95% for classifying fall and non-fall activities. The recurrent neural network provided an accuracy of over 97% in predicting fall, non-fall and a third activity category (defined in this study as “pre/postcondition”). Both neural network models show high potential for use in fall prevention and management. Moreover, two theoretical designs of fall detection systems were proposed in this thesis based on the developed convolutional and recurrent neural networks.
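As a rough illustration of the convolutional model described above, the sketch below defines a small 1D CNN that classifies fixed-length windows of wearable inertial data as fall or non-fall. The window length, channel count and layer sizes are assumptions for illustration, not the architecture evaluated in the thesis.

```python
import tensorflow as tf

WINDOW = 128   # assumed samples per window
CHANNELS = 6   # assumed 3-axis accelerometer + 3-axis gyroscope

def build_fall_cnn():
    """A minimal 1D CNN for binary fall / non-fall classification."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(fall)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fall_cnn()
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```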
83

Mesure de la fragilité et détection de chutes pour le maintien à domicile des personnes âgées / Measure of frailty and fall detection for helping elderly people to stay at home

Dubois, Amandine 15 September 2014 (has links)
Population ageing is a major issue for society in the coming years, especially because of the increase in the number of dependent people. The limited capacity of specialized institutions and the wish of the elderly to stay at home as long as possible explain a growing need for new at-home services. Technologies can help secure the person at home by detecting falls. They can also help evaluate frailty in order to prevent future accidents. This work concerns the development of low-cost ambient systems for helping the elderly stay at home. Depth cameras allow analysing the displacement of the person in real time. We show that it is possible to recognize the activity of the person and to measure gait parameters from the analysis of simple features extracted from depth images. Activity recognition is based on Hidden Markov Models and allows detecting at-risk behaviours and falls. When the person is walking, the analysis of the trajectory of her centre of mass allows measuring gait parameters that can be used for frailty evaluation. This work is based on laboratory experimentations, on the one hand for the acquisition of data used for model training and, on the other hand, for the evaluation of the results. We show that some of the developed Hidden Markov Models are robust enough for classifying the activities. We also evaluate the precision of the gait parameter measurements in comparison to the measures provided by an actimetric carpet. We believe that such a system could be installed in the homes of the elderly because it relies on local processing of the depth images. It would be able to provide daily information on the person's activity and on the evolution of her gait parameters, which are useful for securing her and evaluating her frailty.
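The per-activity HMM classification scheme described above can be sketched with the hmmlearn library: fit one Gaussian HMM per activity from labelled training sequences, then assign a new sequence to the model with the highest log-likelihood. The feature choice and number of hidden states below are assumptions, not the models trained in the thesis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_activity_hmms(sequences_by_activity, n_states=4):
    """Fit one Gaussian HMM per activity label.

    sequences_by_activity: dict mapping label -> list of (T_i, d) feature arrays
                           (e.g. centre-of-mass height and speed per frame).
    """
    models = {}
    for label, seqs in sequences_by_activity.items():
        X = np.vstack(seqs)                 # stack all sequences for this label
        lengths = [len(s) for s in seqs]    # hmmlearn needs per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Return the activity whose HMM gives the highest log-likelihood
    for a (T, d) feature array."""
    return max(models, key=lambda label: models[label].score(sequence))
```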
84

Adapting robot behaviour in smart homes : a different approach using personas

Duque Garcia, Ismael January 2017 (has links)
A challenge in Human-Robot Interaction is tailoring the social skills of robot companions to match those expected by individual humans during their first encounter. Currently, large amounts of user data are needed to configure robot companions with these skills. This creates the need to run long-term Human-Robot Interaction studies in domestic environments. A new approach using personas is explored to alleviate this arduous data collection task without compromising the level of interaction currently shown by robot companions. The personas technique was created by Alan Cooper in 1999 as a tool to define user archetypes of a system in order to reduce the involvement of real users during the development process of a target system. This technique has proven beneficial in Human-Computer Interaction for years. Therefore, similar benefits could be expected when applying personas to Human-Robot Interaction. Our novel approach defines personas as the key component of a computational behaviour model used to adapt robot companions to individual users' needs. This approach reduces the amount of user data that must be collected before a Human-Robot Interaction study by associating new users with pre-defined personas that adapt the robot behaviours through their integration with the computational behaviour model, while preserving the level of social interaction that humans expect from the robot during the first encounter. The University of Hertfordshire Robot House provided the naturalistic domestic environment for the investigation. After incorporating a new module, an Activity Recognition System, to increase the overall context-awareness of the system, a computational behaviour model was defined through an iterative research process. The initial definition of the model was evolved after each experiment based on the findings. Two successive studies investigated personas and determined the steps to follow for their integration into the targeted model. The final model presented was defined from users' preferences and needs when interacting with a robot companion during activities of daily living at home. The main challenge was identifying the variables that match users to personas in our model. This approach opens a new discussion in the Human-Robot Interaction field to define tools that help reduce the amount of user data requiring collection prior to the first interaction with a robot companion in a domestic environment. We conclude that modelling people's preferences when interacting with robot companions is a challenging approach. Integrating the Human-Computer Interaction technique into a computational behaviour model for Human-Robot Interaction studies was more difficult than anticipated. This investigation shows the advantages and disadvantages of introducing this technique into Human-Robot Interaction, and explores the challenges in defining a personas-based computational behaviour model. The continuous learning process experienced helps clarify the steps that other researchers in the field should follow when investigating a similar approach. Some interesting outcomes and trends were also found among users' data, which encourage the belief that the personas technique can be further developed to tackle some of the current difficulties highlighted in the Human-Robot Interaction literature.
85

Economia de energia elétrica em ambientes inteligentes baseada no reconhecimento de atividades do usuário / Electric energy saving in smart environments based on user activity recognition

Lima, Wesllen Sousa 05 March 2015 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In recent years, power consumption has gradually increased in all sectors, especially in residential areas. This increase is mainly due to the emergence of new electrical appliances; for this reason, several solutions have been proposed by government and industry in order to minimize energy consumption in homes. Among the proposed approaches, people's awareness, the use of renewable energy sources and the creation of intelligent appliances stand out. In addition, the use of Information and Communication Technologies (ICTs) in smart environments has been seen as an interesting alternative to deal with this problem. The idea is that residences are instrumented with sensors and actuators in order to monitor people's activities and, thereby, manage power consumption based on their habits. In this context, this work proposes and validates a method to save energy based on user activities in an intelligent environment using artificial intelligence techniques. The goal is to identify the appliances related to user activities and make recommendations during their execution, avoiding waste. The proposed method, called AAEC (Activity-Appliance-Energy Consumption), is able to analyze a set of data collected from sensors available in the environment, recognize user activities and recommend actions aimed at cost containment. Tests on a real database show that the proposed method is able to save up to 35% of electricity. In general, the AAEC method proved to be a good solution to help people save energy without requiring effort or individual behaviour changes, contributing to the conscious use of energy and to the development of a sustainable society.
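To make the activity-to-recommendation step concrete, the sketch below pairs a recognised activity with the appliances it typically requires and flags any other appliance that is drawing power as a candidate to switch off. The activity labels, appliance names and the simple threshold rule are illustrative assumptions, not the AAEC method itself.

```python
# Hypothetical mapping from a recognised activity to the appliances it needs
ACTIVITY_APPLIANCES = {
    "cooking":     {"stove", "range_hood", "kitchen_light"},
    "watching_tv": {"tv", "living_room_light"},
    "sleeping":    {"bedroom_fan"},
}

def recommend_switch_off(activity, appliance_power_w, idle_threshold_w=5.0):
    """Suggest appliances to turn off: anything consuming power that is
    not associated with the activity currently being performed."""
    needed = ACTIVITY_APPLIANCES.get(activity, set())
    return [name for name, watts in appliance_power_w.items()
            if watts > idle_threshold_w and name not in needed]

# Example: the user is watching TV, but the stove and kitchen light are still on
print(recommend_switch_off("watching_tv",
                           {"tv": 90, "stove": 1200, "kitchen_light": 40}))
# -> ['stove', 'kitchen_light']
```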
86

Human Activity Recognition and Control of Wearable Robots

January 2018 (has links)
Wearable robotics has gained huge popularity in recent years due to its wide applications in rehabilitation, military, and industrial fields. The weakness of the skeletal muscles in the aging population and neurological injuries such as stroke and spinal cord injuries seriously limit the abilities of these individuals to perform daily activities. Therefore, there is increasing attention to the development of wearable robots to assist the elderly and patients with disabilities with motion assistance and rehabilitation. In the military and industrial sectors, wearable robots can increase the productivity of workers and soldiers. It is important for wearable robots to maintain smooth interaction with the user while evolving in complex environments with minimum effort from the user. Therefore, recognizing the user's activities, such as walking or jogging, in real time becomes essential to provide appropriate assistance based on the activity. This dissertation proposes two real-time human activity recognition algorithms, the intelligent fuzzy inference (IFI) algorithm and the amplitude omega ($A\omega$) algorithm, to identify human activities, i.e., stationary and locomotion activities. The IFI algorithm uses knee angle and ground contact force (GCF) measurements from four inertial measurement units (IMUs) and a pair of smart shoes, whereas the $A\omega$ algorithm is based on thigh angle measurements from a single IMU. This dissertation also attempts to address the problem of online tuning of virtual impedance for an assistive robot based on real-time gait and activity measurement data to personalize the assistance for different users. An automatic impedance tuning (AIT) approach is presented for a knee assistive device (KAD) in which the IFI algorithm is used for real-time activity measurements. This dissertation also proposes an adaptive oscillator method known as the amplitude omega adaptive oscillator ($A\omega AO$) method for HeSA (hip exoskeleton for superior augmentation) to provide bilateral hip assistance during human locomotion activities. The $A\omega$ algorithm is integrated into the adaptive oscillator method to make the approach robust for different locomotion activities. Experiments are performed on healthy subjects to validate the efficacy of the human activity recognition algorithms and control strategies proposed in this dissertation. Both activity recognition algorithms exhibited high classification accuracy with short update times. The results of AIT demonstrated that the KAD assistive torque was smoother and the EMG signal of the vastus medialis was reduced, compared to constant impedance and finite state machine approaches. The $A\omega AO$ method showed real-time learning of the locomotion activity signals for three healthy subjects while wearing HeSA. To understand the influence of the assistive devices on the inherent dynamic gait stability of the human, a stability analysis is performed. For this, stability metrics derived from dynamical systems theory are used to evaluate unilateral knee assistance applied to the healthy participants. / Dissertation/Thesis / Doctoral Dissertation Aerospace Engineering 2018
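The adaptive-oscillator idea, an oscillator that locks its frequency and phase onto a periodic gait signal such as the thigh angle, can be illustrated with a generic adaptive Hopf oscillator in the style of Righetti and Ijspeert. The gains, signal and integration scheme below are textbook-style assumptions, not the dissertation's $A\omega$ or $A\omega AO$ method.

```python
import numpy as np

def adaptive_hopf(signal, dt, omega0=2 * np.pi, eps=5.0, gamma=8.0, mu=1.0):
    """Track the frequency of a periodic input with an adaptive Hopf oscillator.

    signal: sampled, roughly zero-mean periodic input (e.g. thigh angle)
    dt    : sample period in seconds
    Returns the estimated angular frequency at every time step.
    """
    x, y, omega = 1.0, 0.0, omega0
    omegas = []
    for s in signal:
        F = s - x                       # coupling: error between input and oscillator output
        r = np.hypot(x, y)
        dx = gamma * (mu - r**2) * x - omega * y + eps * F
        dy = gamma * (mu - r**2) * y + omega * x
        domega = -eps * F * y / max(r, 1e-6)   # frequency adaptation term
        x, y, omega = x + dx * dt, y + dy * dt, omega + domega * dt
        omegas.append(omega)
    return np.array(omegas)

# Example: the estimate drifts toward the 1.2 Hz rhythm of a synthetic gait signal
t = np.arange(0, 30, 0.01)
thigh = np.sin(2 * np.pi * 1.2 * t)
print(adaptive_hopf(thigh, dt=0.01)[-1] / (2 * np.pi))
```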
87

SENSOR-BASED HUMAN ACTIVITY RECOGNITION USING BIDIRECTIONAL LSTM FOR CLOSELY RELATED ACTIVITIES

Pavai, Arumugam Thendramil 01 December 2018 (has links)
Recognizing human activities using deep learning methods has significance in many fields such as sports, motion tracking, surveillance, healthcare and robotics. Inertial sensors comprising accelerometers and gyroscopes are commonly used for sensor-based HAR. In this study, a Bidirectional Long Short-Term Memory (BLSTM) approach is explored for recognizing and classifying closely related human activities from body-worn inertial sensor data provided by the UTD-MHAD dataset. The BLSTM model of this study achieved an overall accuracy of 98.05% for 15 different activities and 90.87% for 27 different activities performed by 8 persons with 4 trials per activity per person. This BLSTM model is compared with a unidirectional LSTM model, and a significant improvement in accuracy is observed with the BLSTM for recognition of all 27 activities.
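The bidirectional LSTM classifier described above can be sketched in a few Keras layers; the sequence length, channel count and layer widths below are placeholder assumptions rather than the configuration used in the study.

```python
import tensorflow as tf

TIMESTEPS = 100   # assumed samples per activity segment
CHANNELS = 6      # assumed accelerometer + gyroscope axes
NUM_CLASSES = 27  # activities in UTD-MHAD

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    # The Bidirectional wrapper runs one LSTM forward and one backward in time,
    # so each output step sees both past and future context.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=30)
```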
88

A Simulator Tool for Human Activity Recognition

Westholm, Erik January 2010 (has links)
The goal of this project was to create a simulator that produces data for research in the field of activity recognition. The simulator simulates a human entity moving around in, and interacting with, a PEIS environment. The simulator ended up being based on The Sims 3, and how this was done is described. The reader is expected to have some experience with programming.
89

Representing and Recognizing Temporal Sequences

Shi, Yifan 15 August 2006 (has links)
Activity recognition falls within the general area of pattern recognition, but it resides mainly in the temporal domain, which leads to distinctive characteristics. We provide an extensive survey of existing tools including FSM, HMM, BNT, DBN, SCFG and the Symbolic Network Approach (PNF-network). These tools fail to meet many of the requirements of activity recognition, leading this work to develop a new graphical model: the Propagation Net (P-Net). Many activities can be represented by a partially ordered set of temporal intervals, each of which corresponds to a primitive motion. Each interval has both temporal and logical constraints that control the duration of the interval and its relationship with other intervals. P-Net takes advantage of such fundamental constraints: it provides a graphical conceptual model to describe human knowledge and an efficient computational model to facilitate recognition and learning. P-Nets define an exponentially large joint distribution that standard Bayesian inference cannot handle. We devise two approximation algorithms to interpret a multi-dimensional observation sequence of evidence as a multi-stream propagation process through the P-Net. First, the Local Maximal Search Algorithm (LMSA) is constructed with polynomial complexity; second, we introduce a particle-filter-based framework, the Discrete Condensation (D-Condensation) algorithm, which samples the discrete state space more efficiently than the original Condensation algorithm. To construct a P-Net-based system, we need two parts: the P-Net and the corresponding detector set. Given topology information and a detector library, P-Net parameters can be extracted easily from a relatively small number of positive examples. To avoid the tedious process of manually constructing the detector library, we introduce a semi-supervised learning framework to build the P-Net and the corresponding detectors together. Furthermore, we introduce the Contrast Boosting algorithm, which forces the detectors to be as different as possible but not necessarily non-overlapping. The classification and learning ability of P-Nets are verified on three data sets: (1) a vision-tracked indoor activity data set; (2) a vision-tracked glucose monitor calibration data set; (3) a sensor data set of simple weight-lifting exercises. Comparison with standard SCFG and HMM proves that a P-Net-based system is easier to construct and has a superior ability to classify complex human activity and detect anomalies.
90

Child's play: activity recognition for monitoring children's developmental progress with augmented toys

Westeyn, Tracy Lee 20 May 2010 (has links)
The way in which infants play with objects can be indicative of their developmental progress and may serve as an early indicator of developmental delays. However, observing children interacting with toys for the purpose of quantitative analysis can be a difficult task. To better quantify how play may serve as an early indicator, researchers have conducted retrospective studies examining the differences in object play behaviors among infants. However, such studies require that researchers repeatedly inspect videos of play, often at speeds much slower than real time, to mark points of interest. The research presented in this dissertation examines whether a combination of sensors embedded within toys and automatic pattern recognition of object play behaviors can help expedite this process. For my dissertation, I developed the Child'sPlay system, which uses augmented toys and statistical models to automatically provide quantitative measures of object play interactions, as well as the PlayView interface to view annotated play data for later analysis. In this dissertation, I examine the hypothesis that sensors embedded in objects can provide sufficient data for automatic recognition of certain exploratory, relational, and functional object play behaviors in semi-naturalistic environments, and that a continuum of recognition accuracy exists which allows automatic indexing to be useful for retrospective review. I designed several augmented toys and used them to collect object play data from more than fifty play sessions. I conducted pattern recognition experiments over this data to produce statistical models that automatically classify children's object play behaviors. In addition, I conducted a user study with twenty participants to determine whether annotations automatically generated from these models help improve performance in retrospective review tasks. My results indicate that these statistical models increase user performance and decrease perceived effort when combined with the PlayView interface during retrospective review. The presence of high-quality annotations is preferred by users and promotes an increase in the effective retrieval rates of object play behaviors.
