1 |
Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water. Crist, Courtney Alissa. 27 April 2016
Sensory and consumer sciences aim to understand the influences on product acceptability and purchase decisions. The food industry measures product acceptability through hedonic testing but often does not assess implicit or qualitative response. Incorporating qualitative research and automated facial expression analysis (AFEA) may supplement hedonic acceptability testing to provide product insights. The purpose of this research was to assess the application of AFEA and qualitative analysis to understand consumer experience and response. In two studies, AFEA was applied to elucidate consumers' emotional response to dairy (n=42) and water (n=46) beverages. For dairy, unflavored milk (x=6.6±1.8) and vanilla syrup flavored milk (x=5.9±2.2) (p>0.05) were rated acceptable (1=dislike extremely; 9=like extremely), while salty flavored milk (x=2.3±1.3) was least acceptable (p<0.05). Vanilla syrup flavored milk generated emotions with surprised intermittently present over time (10 sec) (p<0.025) compared to unflavored milk. Salty flavored milk created an intense disgust response, among other emotions, compared to unflavored milk (p<0.025). Using a bitter solutions model in water, acceptability was inversely related to bitter intensity (rs=-0.90; p<0.0001). Facial expressions characterized as disgust and happy increased in duration as bitter intensity increased, while neutral remained similar across bitter intensities compared to the control (p<0.025). In a mixed-methods analysis to enumerate microbial populations, assess water quality, and qualitatively gain consumer insights regarding water fountains and water filling stations, results indicated that water quality (metals, pH, chlorine, and microbial counts) did not differ between water fountains and water filling stations (p>0.05). However, the exterior of water fountains was microbially (8.8 CFU/cm^2) and visually cleaner than that of filling stations (10.4x10^3 CFU/cm^2) (p<0.05).
Qualitative analysis contradicted the quantitative findings: participants preferred water filling stations because they felt these were cleaner and delivered higher quality water. Lastly, the Theory of Planned Behavior assisted in understanding undergraduates' reusable water bottle behavior and revealed 11 categories (attitudes n=6; subjective norms n=2; perceived behavioral control n=2; intentions n=1). Collectively, the use of AFEA and qualitative analysis provided additional insight into consumer-product interaction and acceptability; however, additional research should focus on improving the sensitivity of AFEA for consumer product evaluation. / Ph. D.
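The inverse relationship between bitter intensity and acceptability reported above is a Spearman rank correlation (rs). A minimal sketch of how such a coefficient could be computed, using made-up intensity levels and liking scores rather than the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical data (not from the study): five bitter intensity levels and the
# mean 9-point hedonic score recorded at each level.
bitter_intensity = [0.0, 0.5, 1.0, 2.0, 4.0]
acceptability = [7.8, 6.9, 5.4, 3.8, 2.1]

rs, p_value = spearmanr(bitter_intensity, acceptability)
print(f"rs = {rs:.2f}")  # a strictly decreasing pairing gives rs = -1.00
```

Spearman uses only the ranks, so a monotone decline yields |rs| = 1 even when the drop is nonlinear; a value like -0.90 indicates a strong but not perfectly monotone trend.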
|
2 |
Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products. Clark, Elizabeth A. 17 March 2020
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through hedonic testing but often does not capture affective response as it pertains to product-generated (PG) and product-associated (PA) emotions. This research sought to assess the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing literature was performed, focused on the Facial Action Coding System (FACS) and its use to investigate consumer affect and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as how emotional response is inferred from Action Unit (AU) activation. Automatic Facial Expression Analysis (AFEA), which automates FACS and translates facial muscular positioning into the basic universal emotions, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). The ST-IAT did not yield significant PA emotions for packaged milk (p>0.05), but correspondence analysis of CATA data produced PA emotion insights, including term selection based on arousal and underlying approach/withdrawal motivation related to packaging pigmentation. Time series statistical analysis of AFEA data provided increased insight into significant emotion expression, but the lack of difference (p>0.05) between certain expressed emotions that share no related AUs, such as happy and disgust, indicates that AFEA software may not be identifying AUs and determining emotion-based inferences in agreement with FACS.
In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine whether the sensory-engaging nature of control stimuli could impact time series statistical analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and with greater incidence for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (albeit fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, care should be taken when comparing control and experimental stimuli in time series analysis, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA yields emotional interpretations based on true facial expression of emotion or on facial actions related to sensory perception for consumer products such as foods and beverages. / Doctor of Philosophy / Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase.
The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed, focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach or withdraw from milk based on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that AFEA software may not be determining emotions as outlined in the established FACS procedures.
In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine whether the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, care should be taken, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA yields emotional interpretations based on true facial expression of emotion or on facial actions related to sensory perception for sensory-engaging consumer products such as foods and beverages.
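The frame-by-frame "time series statistical analysis" described in both studies can be sketched as a paired test at each time point against the fixed threshold of 0.025. A toy illustration with simulated AFEA intensities; all names, distributions, and numbers here are assumptions, not the studies' data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical AFEA output: per-participant "disgust" intensity sampled over a
# 10 s evaluation window (30 frames/s) for a treatment and a control stimulus.
n_participants, n_frames = 48, 300
control = rng.beta(2, 8, size=(n_participants, n_frames))          # low baseline
treatment = rng.beta(2, 8, size=(n_participants, n_frames)) + 0.15  # elevated (illustrative)

# Paired Wilcoxon test at each frame; alpha = 0.025 mirrors the threshold in the text.
alpha = 0.025
significant = np.array([
    wilcoxon(treatment[:, t], control[:, t]).pvalue < alpha
    for t in range(n_frames)
])
print(f"significant frames: {significant.sum()} / {n_frames}")
```

Flagging runs of consecutive significant frames, rather than isolated ones, is one way to separate sustained emotional response from momentary facial actions such as chewing.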
|
3 |
Visual Observation of Human Emotions / L'observation visuelle des émotions humaines. Jain, Varun. 30 March 2015
In this thesis we focus on the development of methods and techniques to infer affect from visual information. We focus on facial expression analysis since the face is one of the least occluded parts of the body and facial expressions are among the most visible manifestations of affect. We explore the different psychological theories on affect and emotion, different ways to represent and classify emotions, and the relationship between facial expressions and underlying emotions. We present the use of multiscale Gaussian derivatives as an image descriptor for head pose estimation and smile detection before using them for affect sensing. Principal Component Analysis is used for dimensionality reduction, while Support Vector Machines are used for classification and regression.
We are able to employ the same simple and effective architecture for head pose estimation, smile detection, and affect sensing. We also demonstrate that multiscale Gaussian derivatives not only perform better than the popular Gabor filters but are also computationally cheaper. While performing these experiments we discovered that multiscale Gaussian derivatives do not provide a sufficiently discriminative image description when the face is only partly illuminated. We overcome this problem by combining Gaussian derivatives with Local Binary Pattern (LBP) histograms. This combination helps us achieve state-of-the-art results for smile detection on the benchmark GENKI database, which contains images of people "in the wild" collected from the internet. We use the same description method for face recognition on the CMU-PIE database and the challenging extended YaleB database, and our results compare well with the state of the art. For face recognition we use metric learning for classification, adopting the Minkowski distance as the similarity measure. We find that the L1 and L2 norms are not always the optimum distance metrics; the optimum is often an Lp norm where p is not an integer. Lastly, we develop a multi-modal system for depression estimation with audio and video information as input. We use Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) features to capture intra-facial movements in the videos and dense trajectories for macro movements such as those of the head and shoulders. These video features, along with Low Level Descriptor (LLD) audio features, are encoded using Fisher Vectors, and finally a Support Vector Machine is used for regression. We discover that the LBP-TOP features encoded with Fisher Vectors alone are enough to outperform the baseline method on the Audio Visual Emotion Challenge (AVEC) 2014 database.
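The descriptor-plus-classifier architecture described above (multiscale Gaussian derivatives, PCA for dimensionality reduction, an SVM for prediction) can be sketched end to end. This is a toy illustration of the idea on random images, with assumed scales and dimensions, not the thesis's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def multiscale_gaussian_derivatives(img, sigmas=(1, 2, 4)):
    """Stack first-order Gaussian derivative responses at several scales."""
    feats = []
    for s in sigmas:
        feats.append(gaussian_filter(img, sigma=s, order=(0, 1)))  # d/dx
        feats.append(gaussian_filter(img, sigma=s, order=(1, 0)))  # d/dy
    return np.concatenate([f.ravel() for f in feats])

# Toy data: 40 random 16x16 "face crops" with binary labels (e.g., smile / no smile).
rng = np.random.default_rng(1)
X = np.stack([multiscale_gaussian_derivatives(rng.normal(size=(16, 16)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

# PCA reduces the stacked derivative responses before the SVM classifies them.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
preds = clf.predict(X)
```

The same pipeline shape serves classification (smile detection) and, with `SVR` in place of `SVC`, regression (head pose angles, affect dimensions).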
We thereby present an effective technique for depression estimation which can easily be extended to other slowly varying affective states such as mood.
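The metric-learning observation above, that the optimum Minkowski norm often has non-integer p, can be illustrated directly; the vectors here are arbitrary examples:

```python
import numpy as np
from scipy.spatial.distance import minkowski

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.5, 0.5])

# The Minkowski distance generalizes L1 (p=1) and L2 (p=2); p need not be an integer.
d1 = minkowski(a, b, p=1)     # |1| + |1.5| + |1.5| = 4.0
d2 = minkowski(a, b, p=2)     # sqrt(1 + 2.25 + 2.25) ≈ 2.345
d15 = minkowski(a, b, p=1.5)  # a fractional norm, falling between the two
print(d1, d15, d2)
```

Sweeping p over a grid and cross-validating nearest-neighbor accuracy at each value is one simple way to find such a non-integer optimum for a given dataset.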
|
5 |
IoT DEVELOPMENT FOR HEALTHY INDEPENDENT LIVING. Greene, Shalom. 01 January 2017
The rise of internet-connected devices has equipped the home with a vast array of enhancements that make life more convenient. These internet-connected devices can form a community of devices known as the Internet of Things (IoT). IoT devices hold great value for promoting healthy independent living for older adults.
Fall-related injuries have been one of the leading causes of death in older adults. For example, every year more than a third of people over 65 in the U.S. experience a fall, of which up to 30 percent result in moderate to severe injury. Therefore, this thesis proposes an IoT-based fall detection system for smart home environments that not only sends out alerts but also launches interaction modes, such as voice assistance and camera monitoring. Such connectivity could allow older adults to interact with the system without concern about a learning curve. The proposed IoT-based fall detection system will enable family and caregivers to be immediately notified of the event and to remotely monitor the individual. Integrated within a smart home environment, the proposed IoT-based fall detection system can improve the quality of life of older adults.
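A common way to detect a fall from accelerometer samples is a free-fall-then-impact threshold rule, sketched below. The thresholds, window size, and data are illustrative assumptions, not the detection method proposed in the thesis:

```python
import math

# Assumed thresholds (in g) and sample window; real systems tune these empirically.
FREE_FALL_G, IMPACT_G, WINDOW = 0.4, 2.5, 10

def magnitude(sample):
    """Acceleration magnitude of an (x, y, z) sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Flag a free-fall dip followed shortly by an impact spike."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:  # free-fall phase: magnitude drops well below 1 g
            if any(v > IMPACT_G for v in mags[i:i + WINDOW]):  # impact follows
                return True
    return False

walking = [(0.1, 0.2, 1.0)] * 20  # ~1 g throughout: no fall
fall = [(0.1, 0.2, 1.0)] * 5 + [(0.0, 0.1, 0.2)] * 3 + [(1.5, 2.0, 1.8)] * 2
print(detect_fall(walking), detect_fall(fall))  # False True
```

In an IoT deployment, a `True` result would trigger the alert and launch the voice-assistance or camera-monitoring interaction the text describes.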
Along with these physical health concerns, psychological stress is also a great concern among older adults. Stress has been linked to emotional and physical conditions such as depression, anxiety, heart attack, and stroke. Increased susceptibility to stress may accelerate cognitive decline, resulting in the conversion of cognitively normal older adults to MCI (Mild Cognitive Impairment), and of MCI to dementia. Thus, if stress can be measured, countermeasures can be put in place to reduce stress and its negative effects on the psychological and physical health of older adults. This thesis presents a framework that can be used to collect and pre-process physiological data for the purpose of validating galvanic skin response (GSR), heart rate (HR), and emotional valence (EV) measurements against cortisol and self-reporting benchmarks for stress detection. The results of this framework can be used for feature extraction to feed into a regression model for validating each combination of physiological measurements. Also, the potential of this framework to automate stress protocols like the Trier Social Stress Test (TSST) could pave the way for an IoT-based platform for automated stress detection and management.
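The pre-processing stage described above, turning raw physiological streams such as GSR or HR into features for a downstream regression model, might look like the following sketch; the function name, window size, and sample values are assumptions for illustration:

```python
import statistics

def window_features(signal, window):
    """Extract simple per-window features (mean, SD, range) from a 1-D signal."""
    feats = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        feats.append({
            "mean": statistics.fmean(w),
            "sd": statistics.stdev(w),
            "range": max(w) - min(w),
        })
    return feats

# Illustrative GSR readings (microsiemens); the second half simulates rising arousal.
gsr = [2.1, 2.2, 2.0, 2.3, 3.1, 3.4, 3.0, 3.6]
feats = window_features(gsr, window=4)
print(feats)  # the second window's mean exceeds the first's
```

Features like these, computed per protocol phase (baseline, stressor, recovery), could then be regressed against cortisol or self-report scores to validate each physiological measure.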
|