11

IoT DEVELOPMENT FOR HEALTHY INDEPENDENT LIVING

Greene, Shalom, 01 January 2017
The rise of internet-connected devices has brought the home a vast array of enhancements that make life more convenient. These internet-connected devices can be networked to form a community of devices known as the Internet of Things (IoT). IoT devices hold great value for promoting healthy independent living among older adults. Fall-related injuries have been one of the leading causes of death in older adults. For example, every year more than a third of people over 65 in the U.S. experience a fall, of which up to 30 percent result in moderate to severe injury. Therefore, this thesis proposes an IoT-based fall detection system for smart home environments that not only sends out alerts but also launches interaction models, such as voice assistance and camera monitoring. Such connectivity could allow older adults to interact with the system without the burden of a learning curve. The proposed IoT-based fall detection system enables family and caregivers to be immediately notified of the event and to remotely monitor the individual. Integrated within a smart home environment, the proposed IoT-based fall detection system can improve the quality of life among older adults.

Along with the physical concerns of health, psychological stress is also a great concern among older adults. Stress has been linked to emotional and physical conditions such as depression, anxiety, heart attacks, and stroke. Increased susceptibility to stress may accelerate cognitive decline, converting cognitively normal older adults to mild cognitive impairment (MCI), and MCI to dementia. Thus, if stress can be measured, countermeasures can be put in place to reduce stress and its negative effects on the psychological and physical health of older adults. This thesis presents a framework for collecting and pre-processing physiological data in order to validate galvanic skin response (GSR), heart rate (HR), and emotional valence (EV) measurements against cortisol and self-reporting benchmarks for stress detection. The results of this framework can be used for feature extraction to feed a regression model that validates each combination of physiological measurements. The potential of this framework to automate stress protocols like the Trier Social Stress Test (TSST) could also pave the way for an IoT-based platform for automated stress detection and management.
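As a rough illustration of the kind of pipeline this abstract describes, the hedged sketch below computes simple window-level features from GSR and heart-rate streams and fits a regression model against self-reported stress scores. Everything here, from the feature set to the function names and the synthetic data, is an assumption for illustration, not the thesis's actual implementation.

```python
# Illustrative sketch only: windowed feature extraction from GSR and heart-rate
# signals feeding a regression model, as the abstract outlines. Names and the
# feature choices are hypothetical, not taken from the thesis.
import numpy as np
from sklearn.linear_model import LinearRegression

def window_features(signal, fs, window_s=30):
    """Split a 1-D physiological signal into fixed windows and summarize each."""
    win = int(fs * window_s)
    n = len(signal) // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Synthetic stand-ins for recorded GSR (4 Hz) and HR (1 Hz) streams.
rng = np.random.default_rng(0)
gsr = rng.normal(2.0, 0.3, 4 * 600)   # 10 minutes of GSR at 4 Hz
hr = rng.normal(72, 5, 600)           # 10 minutes of HR at 1 Hz

X = np.hstack([window_features(gsr, fs=4), window_features(hr, fs=1)])
y = rng.uniform(0, 10, len(X))        # placeholder self-reported stress scores

model = LinearRegression().fit(X, y)
print("R^2 against the self-report benchmark:", model.score(X, y))
```

In the framework the abstract describes, the placeholder scores would instead be the cortisol or self-reporting benchmarks, and a separate model would be fit for each combination of GSR, HR, and EV features to validate that combination.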
12

Représentation invariante des expressions faciales : Application en analyse multimodale des émotions / Invariant Representation of Facial Expressions: Application to Multimodal Analysis of Emotions

Soladié, Catherine, 13 December 2013
More and more applications aim at automating the analysis of human behavior to assist or replace the experts who currently conduct these analyses. This thesis deals with the analysis of facial expressions, which provide key information on these behaviors. Our work proposes an innovative solution for effectively defining a facial expression, regardless of the morphology of the subject. The approach is based on the organization of expressions. We show that the organization of expressions, as defined here, is universal and can be used to uniquely define an expression: an expression is characterized by its intensity and its position relative to the other expressions. The solution is compared with conventional methods based on appearance data and shows a significant increase in recognition results on 14 non-basic expressions. The method has been extended to unknown subjects. The main idea is to create a plausible appearance space dedicated to the unknown person by synthesizing their basic expressions from deformations learned on other subjects and applied to the neutral face of the unknown subject. The solution is also tested in a more comprehensive multimodal environment whose aim is the recognition of emotions in spontaneous conversations. Our method was entered in the international AVEC 2012 challenge (Audio/Visual Emotion Challenge), where we finished 2nd, with recognition rates very close to the winners'. Comparison of the two methods (ours and the winners') suggests that the extraction of relevant features is the key to such systems.
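To make the synthesis idea concrete, here is a minimal, hypothetical sketch of applying deformations learned on other subjects to an unknown subject's neutral face. The landmark count, array shapes, and all names are assumptions for illustration, not the thesis's actual method.

```python
# Hypothetical sketch of the person-specific appearance-space idea: basic
# expressions for an unseen subject are synthesized by adding deformation
# fields (learned on other subjects) to the subject's neutral-face landmarks.
import numpy as np

def synthesize_expressions(neutral, deformations, intensities):
    """neutral: (L, 2) landmarks; deformations: dict of (L, 2) offsets per expression."""
    space = {}
    for name, offset in deformations.items():
        # Each intensity scales the learned offset, tracing the expression's
        # curve outward from the neutral center of the appearance space.
        space[name] = [neutral + a * offset for a in intensities]
    return space

L = 68  # a common facial-landmark count (assumption)
neutral = np.random.rand(L, 2)
deformations = {"joy": np.random.randn(L, 2) * 0.01,
                "surprise": np.random.randn(L, 2) * 0.01}
space = synthesize_expressions(neutral, deformations, intensities=[0.25, 0.5, 1.0])
print({name: len(curve) for name, curve in space.items()})  # 3 intensities each
```

Under this reading, an observed expression can then be located by its intensity along one of these synthesized curves and by its position relative to the other curves, which is the organization-of-expressions idea the abstract states.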
13

Expressing emotions through vibration for perception and control

ur Réhman, Shafiq, January 2010
This thesis addresses a challenging problem: how to let the visually impaired "see" others' emotions. We, human beings, depend heavily on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social settings. To enhance the social interaction abilities of the visually impaired, this thesis works on the scientific topic of expressing human emotions through vibrotactile patterns. Delivering human emotions through touch is quite challenging, since our touch channel is very limited.

We first investigated how to render emotions through a vibrator. We developed a real-time "lipless" tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on the mobile phone to improve the user's communication and entertainment experience. To display more natural emotions (i.e., emotion type plus emotion intensity), we developed technology that enables the visually impaired to directly interpret human emotions. This was achieved through machine vision techniques and a vibrotactile display. The display comprises a "vibration actuator matrix" mounted on the back of a chair; the actuators are sequentially activated to provide dynamic emotional information.

The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace the state-of-the-art facial action coding system (FACS) approach. We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions, with increasing intensity, become curves on the manifold extending from the center. Blends of emotions lie between those curves and can be defined analytically by the positions of the main curves. The manifold is the "Braille code" of emotions.

The developed methodology and technology have been extended to build assistive wheelchair systems for a specific group of disabled people, cerebral palsy or stroke patients (i.e., those lacking fine motor control skills), who cannot access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array that provides the user of the wheelchair with action information from gestures and with system status information, which is very important for the usability of such an assistive system. The current research not only provides a foundation stone for vibrotactile rendering systems based on object localization but also takes a concrete step toward a new dimension of human-machine interaction. / Taktil Video
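As a hedged sketch of the chair-mounted display concept, the snippet below sequentially activates cells of a small 2D actuator grid along an emotion-intensity trajectory. The grid size, the timing, and the drive_actuator stub are illustrative assumptions; real hardware I/O (e.g., PWM to motor drivers) would replace the stub, and the actual actuation patterns are those derived from the expression manifold in the thesis.

```python
# Illustrative sketch of sequentially activating a 2-D actuator matrix to
# render an emotion trajectory, as described for the chair-mounted display.
# The grid layout and drive function are assumptions, not the thesis code.
import time

ROWS, COLS = 4, 4  # hypothetical actuator grid on the chair back

def drive_actuator(row, col, amplitude):
    # Placeholder for real hardware I/O; here we just log the command.
    print(f"actuator ({row},{col}) -> amplitude {amplitude:.2f}")

def render_trajectory(points, dwell_s=0.1):
    """points: sequence of (row, col, intensity) along an emotion's manifold curve."""
    for row, col, intensity in points:
        drive_actuator(row, col, amplitude=intensity)
        time.sleep(dwell_s)  # brief dwell so successive pulses read as motion

# A "joy at increasing intensity" curve sweeping outward from the grid center,
# mirroring how basic expressions extend from the center of the manifold.
render_trajectory([(2, 2, 0.2), (2, 3, 0.5), (1, 3, 0.8), (0, 3, 1.0)])
```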
