61

Human Activity Recognition and Control of Wearable Robots

January 2018 (has links)
abstract: Wearable robotics has gained huge popularity in recent years due to its wide applications in rehabilitation, military, and industrial fields. Weakness of the skeletal muscles in the aging population, and neurological injuries such as stroke and spinal cord injuries, seriously limit the ability of these individuals to perform daily activities. Therefore, there is increasing attention on the development of wearable robots to assist the elderly and patients with disabilities for motion assistance and rehabilitation. In the military and industrial sectors, wearable robots can increase the productivity of soldiers and workers. It is important for a wearable robot to maintain smooth interaction with the user while evolving in complex environments with minimum effort from the user. Therefore, recognizing the user's activities, such as walking or jogging, in real time becomes essential to provide appropriate assistance. This dissertation proposes two real-time human activity recognition algorithms, the intelligent fuzzy inference (IFI) algorithm and the amplitude omega ($A\omega$) algorithm, to identify stationary and locomotion activities. The IFI algorithm uses knee angle and ground contact force (GCF) measurements from four inertial measurement units (IMUs) and a pair of smart shoes, whereas the $A\omega$ algorithm is based on thigh angle measurements from a single IMU. This dissertation also addresses the problem of online tuning of virtual impedance for an assistive robot based on real-time gait and activity measurement data, to personalize the assistance for different users. An automatic impedance tuning (AIT) approach is presented for a knee assistive device (KAD) in which the IFI algorithm provides real-time activity measurements.
This dissertation also proposes an adaptive oscillator method, the amplitude omega adaptive oscillator ($A\omega AO$) method, for HeSA (hip exoskeleton for superior augmentation) to provide bilateral hip assistance during human locomotion. The $A\omega$ algorithm is integrated into the adaptive oscillator method to make the approach robust across locomotion activities. Experiments are performed on healthy subjects to validate the efficacy of the activity recognition algorithms and control strategies proposed in this dissertation. Both activity recognition algorithms exhibited high classification accuracy with short update times. The results of AIT demonstrated that the KAD assistive torque was smoother, and the EMG signal of the vastus medialis was reduced, compared to constant impedance and finite state machine approaches. The $A\omega AO$ method showed real-time learning of locomotion activity signals for three healthy subjects wearing HeSA. To understand the influence of assistive devices on the inherent dynamic gait stability of the human, a stability analysis is performed: stability metrics derived from dynamical systems theory are used to evaluate unilateral knee assistance applied to healthy participants. / Dissertation/Thesis / Doctoral Dissertation Aerospace Engineering 2018
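The adaptive-oscillator machinery referenced above can be illustrated with a small sketch. The snippet below is a generic adaptive frequency (phase) oscillator that locks onto the frequency and amplitude of a periodic input such as a thigh-angle signal; the gains, the Euler integration, and the single-harmonic amplitude update are illustrative assumptions, not the dissertation's $A\omega AO$ controller.

```python
import math

def adaptive_oscillator(signal, dt, omega0, K=2.0, eta=1.0):
    """Phase oscillator that entrains to a periodic input: the phase is
    pulled toward the input, the frequency follows the phase correction
    (Hebbian-style frequency learning), and a single-harmonic amplitude
    estimate is adapted to shrink the prediction error."""
    phi, omega, alpha = 0.0, omega0, 0.0
    for theta in signal:
        err = theta - alpha * math.cos(phi)      # prediction error
        corr = K * err * math.sin(phi)           # phase/frequency correction
        alpha += dt * eta * math.cos(phi) * err  # amplitude learning
        phi += dt * (omega - corr)
        omega -= dt * corr
    return omega, alpha

# 200 s of a 2 rad/s cosine, starting from a wrong frequency guess of 1.5.
dt = 0.001
sig = [math.cos(2.0 * k * dt) for k in range(200_000)]
omega, alpha = adaptive_oscillator(sig, dt, omega0=1.5)
```

After entrainment the prediction error vanishes for a pure sinusoid, so the frequency estimate settles near the input's 2 rad/s and the amplitude estimate near 1.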
62

SENSOR-BASED HUMAN ACTIVITY RECOGNITION USING BIDIRECTIONAL LSTM FOR CLOSELY RELATED ACTIVITIES

Pavai, Arumugam Thendramil 01 December 2018 (has links)
Recognizing human activities using deep learning methods has significance in many fields, such as sports, motion tracking, surveillance, healthcare, and robotics. Inertial sensors comprising accelerometers and gyroscopes are commonly used for sensor-based HAR. In this study, a bidirectional long short-term memory (BLSTM) approach is explored for recognition and classification of closely related human activities on body-worn inertial sensor data provided by the UTD-MHAD dataset. The BLSTM model of this study achieves an overall accuracy of 98.05% for 15 different activities and 90.87% for 27 different activities performed by 8 persons, with 4 trials per activity per person. The BLSTM model is compared with a unidirectional LSTM model, and a significant improvement in accuracy is observed for recognition of all 27 activities with BLSTM over LSTM.
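The bidirectional idea can be sketched without any deep learning library: the sequence is processed once forward and once backward, and the two hidden states are concatenated per time step, so each step's feature sees both past and future context. The sketch below uses a one-unit plain tanh RNN cell with arbitrary toy weights in place of an LSTM, purely for illustration:

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.3):
    """Run a one-unit tanh RNN over seq, returning the hidden state per step."""
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def bidirectional(seq):
    """Pair each forward state with the backward state for the same step."""
    fwd = rnn_pass(seq)
    bwd = rnn_pass(seq[::-1])[::-1]   # reverse, process, re-align
    return list(zip(fwd, bwd))

feats = bidirectional([0.1, 0.4, -0.2, 0.7])
```

Note that the first step's forward state depends only on the first input, while its backward state already summarizes the whole future of the sequence; that extra context is what the BLSTM exploits for closely related activities.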
63

Factors affecting the management of Muntjac Deer (Muntiacus muntjak) in Bali Barat National Park, Indonesia

Oka, Gusti Made, University of Western Sydney, Hawkesbury, Faculty of Environmental Management and Agriculture January 1998 (has links)
The principal aim of the study, conducted between May 1995 and May 1997, was to collect and analyze information vital to any future management actions that may be applied to the deer living in the wild in the Bali Barat National Park ecosystem in Indonesia. The systems approach used sought to analyze the complex interactions between the soil, plant, animal, and human activity subsystems. In particular, interactions between Rusa deer and Muntjac deer were compared where possible, although the principal focus of the study was the population of Muntjac deer. The soils in habitats frequented by deer in Bali Barat National Park were found to be of relatively low fertility status; chemical analysis revealed that all of the mineral element contents considered in this study were in the lowest range for soils in general. During this study the population of Muntjac deer in the Bali Barat National Park was submitted to phylogenetic analysis to determine whether the Bali population is distinct. Preliminary results indicate that these deer are a part of a diverse but monophyletic group of Muntiacus muntjak. The potentially unique status of Muntjac deer in Bali Barat National Park, and the need to preserve them as part of the natural resource base of the Indonesian archipelago, increased the importance of this study of the ecosystem and social system surrounding Bali Barat National Park. / Doctor of Philosophy (PhD)
64

Child's play: activity recognition for monitoring children's developmental progress with augmented toys

Westeyn, Tracy Lee 20 May 2010 (has links)
The way in which infants play with objects can be indicative of their developmental progress and may serve as an early indicator for developmental delays. However, observing children interacting with toys for the purpose of quantitative analysis can be a difficult task. To better quantify how play may serve as an early indicator, researchers have conducted retrospective studies examining the differences in object play behaviors among infants. However, such studies require that researchers repeatedly inspect videos of play, often at speeds much slower than real time, to mark points of interest. The research presented in this dissertation examines whether a combination of sensors embedded within toys and automatic pattern recognition of object play behaviors can help expedite this process. For my dissertation, I developed the Child'sPlay system, which uses augmented toys and statistical models to automatically provide quantitative measures of object play interactions, as well as the PlayView interface for viewing annotated play data for later analysis. In this dissertation, I examine the hypothesis that sensors embedded in objects can provide sufficient data for automatic recognition of certain exploratory, relational, and functional object play behaviors in semi-naturalistic environments, and that a continuum of recognition accuracy exists which allows automatic indexing to be useful for retrospective review. I designed several augmented toys and used them to collect object play data from more than fifty play sessions. I conducted pattern recognition experiments over this data to produce statistical models that automatically classify children's object play behaviors. In addition, I conducted a user study with twenty participants to determine if annotations automatically generated from these models help improve performance in retrospective review tasks.
My results indicate that these statistical models increase user performance and decrease perceived effort when combined with the PlayView interface during retrospective review. High-quality annotations are preferred by users and increase the effective retrieval rate of object play behaviors.
65

Energy Efficient Context-Aware Framework in Mobile Sensing

Yurur, Ozgur 01 January 2013 (has links)
The ever-increasing technological advances in embedded systems engineering, together with the proliferation of small-size sensor design and deployment, have enabled mobile devices (e.g., smartphones) to recognize daily human actions, activities, and interactions. Inferring a wide variety of user activities from the diverse contexts captured by a series of sensory observations has therefore drawn much interest in ubiquitous sensing research. Context awareness provides the capability of being conscious of the physical environment or situation around mobile device users, allowing network services to respond proactively and intelligently. Hence, with the evolution of smartphones, software developers are empowered to create context-aware applications for recognizing human-centric or community-based social and cognitive activities in any situation and from anywhere. This leads to the exciting vision of an "Internet of Things" society, in which applications encourage users to collect, analyze, and share local sensory knowledge for large-scale community use, forming a smart network capable of making autonomous logical decisions to actuate environmental objects. More significantly, introducing intelligence and situational awareness into the recognition of human-centric event patterns could give a better understanding of human behaviors and a chance to proactively assist individuals to enhance their quality of life. Mobile devices supporting emerging pervasive applications will constitute a significant part of future mobile technologies by providing highly proactive services that require continuous monitoring of user-related contexts.
However, the middleware services provided on mobile devices have limited resources in terms of power, memory, and bandwidth compared to the capabilities of PCs and servers. Above all, power concerns are the major restriction standing in the way of context-aware applications: continuously capturing user context through sensors imposes heavy workloads in hardware and computation, and hence drains the battery rapidly, so device batteries do not last long while sensors operate constantly. In addition, the growing deployment of sensor technologies in mobile devices, and the innumerable software applications utilizing sensors, have led to layered system architectures (i.e., context-aware middleware), so that the architecture can not only offer a wide range of user-specific services but also respond effectively to diversity in sensor utilization, large sensory data acquisitions, ever-increasing application requirements, pervasive context-processing software libraries, and mobile device constraints. Due to the ubiquity of these computing devices in dynamic environments where sensor network topologies actively change, applications must behave opportunistically and adaptively, without a priori assumptions, in response to the availability of diverse resources in the physical world, and with scalability, modularity, extensibility, and interoperability across heterogeneous hardware. In this sense, this dissertation aims to propose novel solutions that improve the existing tradeoff in mobile sensing between accuracy and power consumption while context is inferred under the intrinsic constraints of mobile devices and within emerging context-aware middleware frameworks.
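One concrete instance of the accuracy-versus-power tradeoff discussed above is sensor duty cycling: sampling context less often extends battery life at the cost of slower context detection. A back-of-the-envelope sketch, with hypothetical current-draw figures rather than measurements from the dissertation:

```python
def battery_life_hours(duty_cycle, capacity_mah=2000.0,
                       active_ma=40.0, sleep_ma=2.0):
    """Average battery life when the sensing pipeline draws active_ma
    for a fraction `duty_cycle` of the time and sleep_ma otherwise.
    All figures are hypothetical illustration values."""
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

always_on = battery_life_hours(1.0)  # continuous sensing: 2000/40 = 50 h
cycled = battery_life_hours(0.1)     # 10% duty cycle: 2000/5.8 ≈ 345 h
```

The roughly sevenfold gain falls directly out of the average-current arithmetic; the corresponding cost, not modeled here, is that a context change can go unnoticed for up to a full sleep interval.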
66

A learning-based computer vision approach for the inference of articulated motion = Ein lernbasierter computer-vision-ansatz für die erkennung artikulierter bewegung /

Curio, Cristóbal. January 1900 (has links)
Dissertation--Ruhr-Universität, Bochum, 2004. / Includes bibliographical references (p. 179-187).
67

Sistema embarcado empregado no reconhecimento de atividades humanas / Embedded system applied to human activity recognition

Ferreira, Willian de Assis Pedrobon January 2017 (has links)
Advisor: Alexandre César Rodrigues da Silva / Abstract: The use of sensors in smart environments is fundamental for supervising human activities. In human activity recognition (HAR), monitoring techniques are applied to identify the activities performed in a variety of applications, such as sports and the care of people with special needs. The Sistema de Reconhecimento de Atividades Humanas (SIRAH) performs human activity recognition using an accelerometer located at the monitored person's waist and an artificial neural network to classify seven activities: standing, lying down, seated, walking, running, sitting down, and standing up. Originally implemented in MATLAB, it performed offline classification, so results were not available while the activities were being executed. This work presents the development of two embedded versions of SIRAH that run the classification algorithm during the monitored activities. The first implementation targets Altera's Nios II processor, which matched the accuracy of the offline system but with limited throughput, as the software takes 673 milliseconds to perform a classification. To improve performance, another version was implemented on an FPGA using the VHDL hardware description language. This classification algorithm operates in real time, executing in only 236 microseconds and guaranteeing lossless sampling of the accelerations. / Master's
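The embedded classification step described above — a small feed-forward neural network over accelerometer features — amounts to a short fixed loop of multiply-accumulates and activations, which is exactly the kind of code one would run on a Nios II soft core or unroll into VHDL. A minimal sketch with hypothetical weights and sizes (not the trained SIRAH network):

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden tanh layer plus a linear output layer; returns the
    index of the highest-scoring activity class (argmax)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    scores = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(w_out, b_out)]
    return max(range(len(scores)), key=scores.__getitem__)

# 2 features -> 2 hidden units -> 3 classes, with toy weights.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[2.0, 0.0], [0.0, 2.0], [-2.0, -2.0]], [0.0, 0.0, 0.0]
cls = mlp_forward([1.0, 0.2], W1, b1, W2, b2)
```

Because the network topology is fixed at design time, the same loop translates naturally into pipelined hardware, which is how the FPGA version achieves its microsecond-scale latency.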
68

Modèles profonds de régression et applications à la vision par ordinateur pour l'interaction homme-robot / Deep Regression Models and Computer Vision Applications for Multiperson Human-Robot Interaction

Lathuiliere, Stéphane 22 May 2018 (has links)
In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation, or speech recognition. However, to interact naturally with humans, the robot needs to model high-level concepts such as speech turns, focus of attention, or interactions between participants in a conversation. In this manuscript, we follow a top-down approach. On the one hand, we present two high-level methods that model collective human behaviors. We propose a model able to recognize activities that are performed jointly by different groups of people, such as queueing or talking. Our approach handles the general case where several group activities can occur simultaneously and in sequence. On the other hand, we introduce a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy in the context of human-robot interaction. The robot is able to learn to focus its attention on groups of people from its own audio-visual experiences, without external supervision.
Second, we study in detail deep learning approaches for regression problems. Regression problems are crucial in the context of human-robot interaction in order to obtain reliable information about the head and body poses, or the age, of the persons facing the robot.
Consequently, these contributions are quite general and can be applied in many different contexts. First, we propose to couple a Gaussian mixture of linear inverse regressions with a convolutional neural network. Second, we introduce a Gaussian-uniform mixture model in order to make the training algorithm more robust to noisy annotations. Finally, we perform a large-scale study to measure the impact of several architecture choices and extract practical recommendations for using deep learning approaches in regression tasks. For each of these contributions, a strong experimental validation has been performed, with real-time experiments on the NAO robot or on large and diverse datasets.
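The Gaussian-uniform mixture idea for robustness to noisy annotations can be sketched in one reweighting step: each residual receives a responsibility of being an inlier (Gaussian around zero) versus an outlier (uniform over a broad support), and the fit is re-estimated with those weights. A minimal one-dimensional illustration with assumed parameters, far simpler than the thesis's deep-regression setting:

```python
import math

def inlier_responsibilities(residuals, sigma=1.0, pi_in=0.9, support=100.0):
    """P(inlier | r) under a Gaussian(0, sigma) vs. Uniform(width=support) mix."""
    out = []
    for r in residuals:
        g = math.exp(-0.5 * (r / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        u = 1.0 / support
        out.append(pi_in * g / (pi_in * g + (1 - pi_in) * u))
    return out

def robust_mean(xs, sigma=1.0):
    """One reweighting pass: down-weight points far from the plain mean."""
    m0 = sum(xs) / len(xs)
    w = inlier_responsibilities([x - m0 for x in xs], sigma=sigma)
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

data = [4.8, 5.1, 5.0, 4.9, 5.2, 50.0]   # one gross annotation error
```

Here `robust_mean(data)` lands near 5 even though the plain mean is pulled to 12.5, because the uniform component absorbs the outlier; in the thesis the same mechanism down-weights noisy pose annotations during training.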
69

Kinematic and Dynamical Analysis Techniques for Human Movement Analysis from Portable Sensing Devices

January 2016 (has links)
abstract: Today's world is seeing rapid technological advancement in various fields, with access to faster computers and better sensing devices. With such advancements, the task of recognizing human activities has been acknowledged as an important problem, with a wide range of applications such as surveillance, health monitoring, and animation. Traditional approaches to dynamical modeling have included linear and nonlinear methods, each with its respective drawbacks. An alternative idea I propose is the use of descriptors of the shape of the dynamical attractor as a feature representation for quantifying the nature of the dynamics. The framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features remain stable under different time-series lengths, where traditional dynamical invariants fail. Approximately 1% of the total world population are stroke survivors, making stroke the most common neurological disorder. The resulting demand for rehabilitation facilities is a significant healthcare problem worldwide. The laborious and expensive process of visual monitoring by physical therapists has motivated my research into novel strategies for supplementing hospital therapy in a home setting. In this direction, I propose a general framework for tuning component-level kinematic features using therapists' overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. The rapid technological advancements in computing and sensing have resulted in large amounts of data, which require powerful tools to analyze.
In the recent past, topological data analysis methods have been investigated in various communities, and the work by Carlsson establishes that persistent homology can be used as a powerful topological data analysis approach for effectively analyzing large datasets. I explore suitable topological data analysis methods and propose a framework for human activity analysis that uses them for applications such as action recognition. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
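A standard way to derive an attractor representation directly from observational data, as described above, is time-delay embedding (Takens' theorem): the scalar time series is lifted into vectors of lagged samples whose geometry reflects the shape of the underlying attractor. A minimal sketch, with the embedding dimension and lag chosen arbitrarily here:

```python
def delay_embed(series, dim=3, tau=2):
    """Map x[t] to the point (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# Each row of the result is one point of the reconstructed attractor.
points = delay_embed([0, 1, 2, 3, 4, 5, 6, 7], dim=3, tau=2)
```

Shape descriptors of the resulting point cloud (or, in the topological route, its persistent homology) then serve as the activity features, without ever fitting an explicit dynamical model.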
70

Sistema embarcado empregado no reconhecimento de atividades humanas / Embedded system applied in human activities recognition

Ferreira, Willian de Assis Pedrobon [UNESP] 24 August 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The use of sensors in smart environments is fundamental for monitoring human activities. In human activity recognition (HAR), supervision techniques are employed to identify activities in several areas, such as sports practice and the monitoring of people with special needs. The Sistema de Reconhecimento de Atividades Humanas (SIRAH) is used for human activity recognition, using an accelerometer located at the monitored person's waist and an artificial neural network to classify seven activities: standing, lying, seated, walking, running, sitting down, and standing up. Originally, it performed offline classifications in MATLAB software. In this work we present the development of two embedded versions of SIRAH, which run the classification algorithm while the monitored activities are being performed. The first implementation ran on Altera's Nios II processor, providing the same accuracy as the offline system but with limited processing. To improve performance, the other version was implemented on an FPGA using the VHDL hardware description language; it performs real-time classifications, ensuring lossless acceleration sampling.
