221

Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water

Crist, Courtney Alissa 27 April 2016 (has links)
Sensory and consumer sciences aim to understand the influences on product acceptability and purchase decisions. The food industry measures product acceptability through hedonic testing but often does not assess implicit or qualitative response. Incorporation of qualitative research and automated facial expression analysis (AFEA) may supplement hedonic acceptability testing to provide product insights. The purpose of this research was to assess the application of AFEA and qualitative analysis to understand consumer experience and response. In two studies, AFEA was applied to elucidate consumers' emotional response to dairy (n=42) and water (n=46) beverages. For dairy, unflavored milk (x̄=6.6±1.8) and vanilla syrup flavored milk (x̄=5.9±2.2) (p>0.05) were acceptably rated (1=dislike extremely; 9=like extremely) while salty flavored milk (x̄=2.3±1.3) was least acceptable (p<0.05). Vanilla syrup flavored milk generated emotions with surprised intermittently present over time (10 sec) (p<0.025) compared to unflavored milk. Salty flavored milk created an intense disgust response among other emotions compared to unflavored milk (p<0.025). Using a bitter solutions model in water, acceptability decreased as bitter intensity increased (rs=-0.90; p<0.0001). Facial expressions characterized as disgust and happy increased in duration as bitter intensity increased, while neutral remained similar across bitter intensities compared to the control (p<0.025). In a mixed methods analysis to enumerate microbial populations, assess water quality, and qualitatively gain consumer insights regarding water fountains and water filling stations, results indicated that water quality (metals, pH, chlorine, and microbial counts) did not differ between water fountains and water filling stations (p>0.05). However, the exteriors of water fountains were microbially (8.8 CFU/cm^2) and visually cleaner than those of filling stations (10.4x10^3 CFU/cm^2) (p<0.05). Qualitative analysis contradicted the quantitative findings, as participants preferred water filling stations because they felt the stations were cleaner and delivered higher quality water. Lastly, the Theory of Planned Behavior assisted in understanding undergraduates' reusable water bottle behavior and revealed 11 categories (attitudes n=6; subjective norms n=2; perceived behavioral control n=2; intentions n=1). Collectively, the use of AFEA and qualitative analysis provided additional insight into consumer-product interaction and acceptability; however, additional research should focus on improving the sensitivity of AFEA for consumer product evaluation. / Ph. D.
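A minimal sketch (not the author's analysis code) of the nonparametric relationship reported above: Spearman's rank correlation between coded bitter intensity and 9-point hedonic acceptability. The concentrations and ratings below are invented for illustration.

```python
# Hypothetical example: Spearman rank correlation between bitter intensity and liking.
# Data are simulated; the thesis reports rs = -0.90 with observed panel data.
import numpy as np
from scipy import stats

bitter_level = np.repeat([0, 1, 2, 3, 4], 46)            # 5 coded bitter intensities x 46 panelists
rng = np.random.default_rng(1)
acceptability = np.clip(7 - bitter_level + rng.normal(0, 1, bitter_level.size), 1, 9)  # 9-pt hedonic

rho, p = stats.spearmanr(bitter_level, acceptability)
print(f"rs = {rho:.2f}, p = {p:.2g}")                    # expect a strong negative correlation
```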
222

Facial Analysis for Real-Time Application: A Review in Visual Cues Detection Techniques

Yap, Moi Hoon, Ugail, Hassan, Zwiggelaar, R. 30 August 2012 (has links)
Emerging applications in surveillance, the entertainment industry and other human-computer interaction domains have motivated the development of real-time facial analysis research covering detection, tracking and recognition. In this paper, the authors present a review of recent facial analysis for real-time applications, providing an up-to-date account of research efforts in human computing techniques in the visible domain. The main goal is to provide a comprehensive reference source for researchers involved in real-time facial analysis, regardless of their specific research areas. First, the authors undertake a thorough survey and comparison of face detection techniques, discussing the prominent methods presented in the literature; the performance of these techniques is evaluated using benchmark databases. Subsequently, the authors provide an overview of the state of the art in facial expression analysis and the importance of the psychology inherent in it. During the last decades, facial expression analysis has gradually evolved into automatic facial expression analysis owing to the popularity of digital media and the maturity of computer vision, so the authors also review existing automatic facial expression analysis techniques. Finally, the authors provide an exemplar for the development of a real-time facial analysis application and propose a model for facial analysis. This review shows that facial analysis for real-time applications involves multi-disciplinary aspects, and it is important to take all domains into account when building a reliable system.
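As a hedged illustration of the first pipeline stage the review surveys (real-time face detection), the sketch below uses OpenCV's stock Haar cascade on a webcam stream; it is not taken from the paper, and later stages (tracking, expression analysis) would consume the detected regions.

```python
# Minimal real-time face detection loop with OpenCV's bundled Haar cascade (illustrative only).
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                      # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                 # each face ROI would feed a tracking/expression stage
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```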
223

Cerebral asymmetry in facial affect perception of women: neuropsychological effects of depression

Crews, William David 05 September 2009 (has links)
Forty right-handed women, half of whom had been classified as depressed and the other half as nondepressed, participated in a tachistoscopic study of the influence of depression on the cerebral hemispheric processing of Ekman and Friesen's (1976) happy, sad, and neutral emotional faces. A dynamometer was also used as a standardized measure of hemispheric motor functioning, capturing hand grip strength, perseveration, and fatigue. Results indicated that the depressed women were characterized by elevated levels of both depression and anxiety, suggestive of an agitated, depressive state with heightened arousal. Further, depressed as compared to nondepressed women displayed significantly faster reaction times to sad faces presented to their right visual fields and to happy faces presented to their left visual fields. For the dynamometer data, the primary finding was that depressed women displayed significantly less perseveration with the left hand than nondepressed women; there was also a trend for depressed women to show less perseveration with the right hand. These findings from both the tachistoscope and dynamometer data are suggestive of differential arousal of the left and right cerebral hemispheres and are discussed in light of arousal theory. / Master of Science
224

Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products

Clark, Elizabeth A. 17 March 2020 (has links)
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through hedonic testing but often does not capture affectual response as it pertains to product-generated (PG) and product-associated (PA) emotions. This research sought to assess the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing literature was performed that focused on the Facial Action Coding System (FACS) and its use to investigate consumer affect and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as in how emotional response is inferred from Action Unit (AU) activation. Automatic Facial Expression Analysis (AFEA), which automates FACS and translates facial muscular positioning into the basic universal emotions, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). The ST-IAT did not yield significant PA emotions for packaged milk (p>0.05), but correspondence analysis of CATA data produced PA emotion insights, including term selection based on arousal and an underlying approach/withdrawal motivation related to packaging pigmentation. Time series statistical analysis of AFEA data provided additional insight into significant emotion expression, but the lack of difference (p>0.05) between certain expressed emotions that share no related AUs, such as happy and disgust, indicates that the AFEA software may not be identifying AUs and determining emotion-based inferences in agreement with FACS. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine whether the sensory-engaging nature of control stimuli could impact time series statistical analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and with greater incidence for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (albeit fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken while comparing control and experimental stimuli in time series analysis, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or on facial actions related to sensory perception for consumer products such as foods and beverages.
/ Doctor of Philosophy / Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed that focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as in how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach or withdraw from milk depending on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that the AFEA software may not be determining emotions as outlined in the established FACS procedures. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine whether the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken because facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or on facial actions related to sensory perception for sensory-engaging consumer products such as foods and beverages.
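One plausible form of the time-series comparison described above, sketched here as an assumption since the thesis's exact statistical procedure is not given in the abstract: per-time-window paired tests of AFEA emotion intensity for a stimulus versus the control, flagged at the alpha = 0.025 threshold the study reports. The data below are simulated.

```python
# Hedged sketch: window-by-window Wilcoxon signed-rank tests of an AFEA emotion trace
# (e.g., "happy" intensity) for a milk stimulus against a control stimulus, alpha = 0.025.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_panelists, n_windows = 48, 30                      # e.g., 30 one-second windows (illustrative)
milk = rng.random((n_panelists, n_windows))          # emotion intensity, milk stimulus
control = rng.random((n_panelists, n_windows))       # same emotion, control stimulus

flagged = []
for t in range(n_windows):
    stat, p = stats.wilcoxon(milk[:, t], control[:, t])
    if p < 0.025:                                    # threshold used in the thesis
        flagged.append(t)
print("windows with significantly different expression:", flagged)
```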
225

Negotiating national face: a comparison of The New York Times and the People's Daily coverage of the Hainan incident

Tu, Lingjiang 01 July 2002 (has links)
No description available.
226

Recognition of emotional facial expression in Mexican college students / Reconocimiento de la expresión facial de la emoción en mexicanos universitarios

Anguas-Wong, Ana María, Matsumoto, David 25 September 2017 (has links)
The aim of this study was to explore the patterns of emotion recognition in Mexican bilinguals using the JACFEE (Matsumoto & Ekman, 1988). Previous cross-cultural research has documented high agreement in judgments of facial expressions of emotion; however, none of the previous studies has included data from Mexican culture. Participants were 229 Mexican college students (mean age 21.79 years). Results indicate that each of the seven universal emotions (anger, contempt, disgust, fear, happiness, sadness and surprise) was recognized by the participants above chance levels (p < .001), regardless of the gender or ethnicity of the posers. These findings replicate reported data on the high cross-cultural agreement in emotion recognition (Ekman, 1994) and contribute to the increasing body of evidence regarding the universality of emotions.
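An illustrative sketch (not the authors' analysis) of testing whether recognition of one expression exceeds chance with a one-sided binomial test; taking chance as 1/7 because the judgment task offers seven emotion categories is an assumption made for this example, as are the counts.

```python
# Hypothetical binomial test of above-chance emotion recognition for a single expression.
from scipy import stats

n_judges, n_correct = 229, 180                     # invented counts for one JACFEE expression
result = stats.binomtest(n_correct, n_judges, p=1/7, alternative="greater")
print(f"accuracy = {n_correct / n_judges:.2f}, p = {result.pvalue:.3g}")
```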
227

De la reconnaissance des expressions faciales à une perception visuelle partagée : une architecture sensori-motrice pour amorcer un référencement social d'objets, de lieux ou de comportements / From facial expressions recognition to joint visual perception : a sensori-motor architecture for the social referencing of objects, places, behaviors.

Boucenna, Sofiane 05 May 2011 (has links)
This thesis focuses on emotional interaction in autonomous robotics. The robot must be able to act and react in a natural environment and cope with unpredictable perturbations, so it needs behavioral autonomy, that is, the ability to learn and adapt online. In particular, we study which mechanisms to introduce so that the robot can build a perception of the objects in its environment that can be shared with a human partner. The problem is to teach the robot to prefer certain objects and avoid others. A solution can be found in psychology in what is called social referencing, which consists of attributing a value to an object through interaction with a human partner. In this context, our problem is how a robot can autonomously learn to recognize the facial expressions of a human partner and then use them to give a valence to objects and allow their discrimination. We are interested in understanding how emotional interactions with a partner can bootstrap behaviors of increasing complexity such as social referencing. Our idea is that social referencing, as well as the recognition of facial expressions, can emerge from a sensorimotor architecture: without any knowledge of what the other is, the robot should be able to learn increasingly complex "social" tasks. We defend the idea that social referencing can be bootstrapped by a simple cascade of sensorimotor architectures that are not, at their base, dedicated to social interactions. This thesis addresses several topics that share social interaction as a common denominator. We first propose an architecture that can autonomously learn to recognize primary facial expressions through an imitation game between an expressive robotic head and an experimenter; interaction with the robotic device begins with the learning of five prototypical facial expressions. We then propose an architecture that can reproduce facial mimicry and its different levels of intensity; the expressive head can reproduce secondary expressions, for example joy mixed with anger. We also show that face discrimination can emerge from this emotional interaction thanks to an implicit rhythmicity that is created between the human and the robot. Finally, we propose a sensorimotor model capable of social referencing. Three situations were tested: 1) a robotic arm able to grasp or flee from objects according to the emotional interactions coming from the human partner; 2) a mobile robot able to reach or avoid certain areas of its environment; 3) an expressive head able to orient its gaze in the same direction as the human while attributing emotional values to objects through the expressive interaction of the experimenter. We thus show that a developmental sequence can emerge from very low-level emotional interaction and that social referencing can first be explained at a sensorimotor level without needing to invoke a theory-of-mind model.
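A toy sketch of the social-referencing idea described above; this is my illustration under strong simplifying assumptions, not the thesis's sensorimotor architecture. The partner's recognized facial expression supplies a valence signal that is associated online with the attended object's features, and the learned valence then drives approach or avoidance.

```python
# Hypothetical social-referencing toy: expression valence conditions object preference.
import numpy as np

EXPRESSION_VALENCE = {"joy": +1.0, "anger": -1.0, "neutral": 0.0}   # assumed mapping

class SocialReferencer:
    def __init__(self, n_features, lr=0.2):
        self.w = np.zeros(n_features)            # linear valence predictor over object features
        self.lr = lr

    def observe(self, object_features, partner_expression):
        """Update the attended object's valence from the partner's expression (delta rule)."""
        target = EXPRESSION_VALENCE[partner_expression]
        pred = self.w @ object_features
        self.w += self.lr * (target - pred) * object_features

    def act(self, object_features):
        return "approach" if self.w @ object_features > 0 else "avoid"

robot = SocialReferencer(n_features=4)
cup = np.array([1.0, 0.0, 1.0, 0.0])             # toy feature vector for one object
for _ in range(10):
    robot.observe(cup, "anger")                  # partner reacts negatively to the cup
print(robot.act(cup))                            # -> "avoid"
```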
228

Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks

Chu, Wen-Sheng 01 January 2017 (has links)
Automatic analysis of facial actions (AFA) can reveal a person’s emotion, intention, and physical state, and make possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs, and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity within and between the BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable. We introduce an unsupervised Branch-and-Bound framework to discover correlated facial actions in unannotated video. We term this approach Common Event Discovery (CED). We evaluate CED in video and motion capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to supervised approaches.
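A simplified sketch in the spirit of Selective Transfer Machine: STM jointly optimizes instance weights and the classifier, whereas this illustration (my simplification, with invented data) fixes the weights in one step by scoring each generic training frame's kernel similarity to the unlabeled frames of the target person, then trains a weighted SVM.

```python
# Hedged, simplified STM-like personalization: similarity-based reweighting + weighted SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))             # generic multi-person AU features (illustrative)
y_train = rng.integers(0, 2, 500)                # AU present / absent labels
X_target = rng.normal(loc=0.3, size=(80, 20))    # unlabeled frames from the test person

# Weight each training frame by its mean similarity to the target person's distribution.
weights = rbf_kernel(X_train, X_target, gamma=0.05).mean(axis=1)
weights *= len(weights) / weights.sum()          # normalize to mean weight 1

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train, sample_weight=weights) # decision boundary biased toward the target person
pred = clf.predict(X_target)
```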
229

Facial expression discriminates between pain and absence of pain in the non-communicative, critically ill adult patient

Arif-Rahu, Mamoona 03 December 2010 (has links)
BACKGROUND: Pain assessment is a significant challenge in critically ill adults, especially those unable to communicate their pain level. At present there is no universally accepted pain scale for use in the non-communicative (cognitively impaired, sedated, paralyzed or mechanically ventilated) patient. Facial expressions are considered among the most reflexive and automatic nonverbal indices of pain. The facial expression components of existing pain assessment tools include a variety of facial descriptors (wincing, frowning, grimacing, smiling/relaxed) with inconsistent pain intensity ratings or checklists of behaviors. The lack of consistent facial expression description and quantification of pain intensity makes standardization of pain evaluation difficult. Although facial expression is an important behavioral measure of pain intensity, precise and accurate methods for interpreting the specific facial actions of pain in critically ill adults have not been identified. OBJECTIVE: The three specific aims of this prospective study were: 1) to describe facial actions during pain in non-communicative critically ill patients; 2) to determine the facial actions that characterize the pain response; and 3) to describe the effect of patient factors on facial actions during the pain response. DESIGN: Descriptive, correlational, comparative. SETTING: Two adult critical care units (Surgical Trauma ICU-STICU and Medical Respiratory ICU-MRICU) at an urban university medical center. SUBJECTS: A convenience sample of 50 non-communicative, critically ill, intubated, mechanically ventilated adult patients. Fifty-two percent were male and 48% Euro-American, with a mean age of 52.5 years (±17.2). METHODS: Subjects were video-recorded while in the intensive care unit at rest (baseline phase) and during endotracheal suctioning (procedure phase). Observer-based pain ratings were gathered using the Behavioral Pain Scale (BPS). Facial actions were coded from video using the Facial Action Coding System (FACS) over a 30-second period for each phase. Pain scores were calculated from FACS action units (AUs) following the Prkachin and Solomon metric. RESULTS: Fourteen facial action units were associated with the pain response and occurred more frequently during the noxious procedure than during baseline. These included brow raiser, brow lower, orbit tightening, eye closure, head movements, mouth opening, nose wrinkling, nasal dilatation, and chin raise. The sum of the intensities of the 14 AUs was correlated with the BPS (r=0.70, P<0.0001) and with the facial expression component of the BPS (r=0.58, P<0.0001) during the procedure. A stepwise multivariate analysis identified 5 pain-relevant facial AUs [brow raiser (AU 1), brow lower (AU 4), nose wrinkling (AU 9), head turned right (AU 52), and head turned up (AU 53)] that accounted for 71% of the variance (adjusted R2=0.682) in the pain response (F=21.99, df=49, P<0.0001). The FACS pain intensity score based on the 5 pain-relevant facial AUs was associated with the BPS (r=0.77, P<0.0001) and with the facial expression component of the BPS (r=0.63, P<0.0001) during the procedure. Patient factors (e.g., age, gender, race, diagnosis, duration of endotracheal intubation, ICU length of stay, analgesic and sedative drug usage, and severity of illness) were not associated with the FACS pain intensity score. CONCLUSIONS: Overall, the FACS pain intensity score composed of inner brow raiser, brow lower, nose wrinkle, and head movements reflected a general pain action in our study.
Upper facial expression provides an important behavioral measure of pain which may be used in the clinical evaluation of pain in non-communicative critically ill patients. These results provide preliminary evidence that the Facial Action Coding System can discriminate a patient's acute pain experience.
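An illustrative sketch (not the study's code) of how a FACS-based pain intensity score of the kind described above could be formed: sum the intensities of the five pain-relevant action units reported in the abstract (AU 1, 4, 9, 52, 53) and correlate the score with the Behavioral Pain Scale. The per-patient values below are invented, and the use of Pearson's r for the reported correlation is an assumption.

```python
# Hypothetical FACS pain score: sum of selected AU intensities, correlated with BPS.
import numpy as np
from scipy import stats

PAIN_AUS = ["AU1", "AU4", "AU9", "AU52", "AU53"]

rng = np.random.default_rng(0)
au_intensity = {au: rng.integers(0, 6, 50) for au in PAIN_AUS}  # 0-5 intensity, n = 50 patients
bps = rng.integers(3, 13, 50)                                   # Behavioral Pain Scale (3-12)

facs_pain = sum(au_intensity[au] for au in PAIN_AUS)            # elementwise sum over the 5 AUs
r, p = stats.pearsonr(facs_pain, bps)
print(f"r = {r:.2f}, p = {p:.4f}")                              # thesis reports r = 0.77 with observed data
```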
230

The communication of emotional meaning among Chinese students in Hong Kong.

January 1978 (has links)
Anthony Chan Yuk Cheung. / Thesis (M.Ed.)--Chinese University of Hong Kong. / Bibliography: leaves [57]-60.
