1

Evaluating Consumer Emotional Response to Beverage Sweeteners through Facial Expression Analysis

Leitch, Kristen Allison 23 June 2015 (has links)
Emotional processing and characterization of internal and external stimuli is believed to play an integral role in consumer acceptance or rejection of food products. In this research, three experiments were completed with the ultimate goal of adding to the growing body of research pertaining to food, emotions, and acceptance using traditional affective sensory methods in combination with implicit (uncontrollable) and explicit (cognitive) emotional measures. Sweetness equivalence of several artificial (acesulfame potassium, saccharin, and sucralose) and natural (42% high fructose corn syrup and honey) sweeteners was established relative to a 5% sucrose solution. Differences in consumer acceptability and emotional response to sucrose (control) and four equi-sweet alternatives (acesulfame potassium, high fructose corn syrup, honey, and sucralose) in tea were evaluated using a 9-point hedonic scale, a check-all-that-apply (CATA) emotion term questionnaire (explicit), and automated facial expression analysis (AFEA) (implicit). Facial expression responses and emotion term categorization based on selection frequencies were able to discern differences in emotional response as they related to hedonic liking between sweetener categories (artificial; natural). The potential influence of varying product information on consumer acceptance and emotional responses was then evaluated for three sweeteners (sucrose, acesulfame potassium, and high fructose corn syrup) in tea solutions. Observed differences in liking and emotional term characterizations based on the validity of product information for sweeteners were attributed to cognitive dissonance. False informational cues had an observed dampening effect on the implicit emotional response to alternative sweeteners. Significant moderate correlations between liking and several basic emotions supported the belief that implicit emotions are contextually specific. Limitations pertaining to AFEA data collection and emotional interpretation of responses to sweeteners include high panelist variability (within and across panelists), calibration techniques, video quality, software sensitivity, and a general lack of consistency concerning methods of analysis. When used in conjunction with traditional affective methodology and cognitive emotional characterization, AFEA provides an additional layer of valued information about the consumer food experience. / Master of Science in Life Sciences
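A minimal sketch of the kind of liking-emotion correlation analysis this abstract reports, assuming hypothetical panelist data and Spearman's rank correlation; this is illustrative, not the thesis's code or dataset:

```python
# Illustrative sketch (not the thesis code): correlating 9-point hedonic
# liking scores with a mean implicit emotion intensity from AFEA output.
# All values below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

liking = np.array([7, 8, 3, 5, 6, 2, 9, 4, 6, 7])            # 1-9 hedonic scale
happy = np.array([0.42, 0.55, 0.10, 0.30, 0.38,
                  0.08, 0.61, 0.22, 0.35, 0.47])              # AFEA intensity, 0-1

rho, p_value = spearmanr(liking, happy)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```

A rank-based statistic is a natural choice here because hedonic scores are ordinal; the same call generalizes to each basic emotion channel in turn.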
2

Facial Image Based Expression Classification System Using Committee Neural Networks

Paknikar, Gayatri Suhas 02 September 2008 (has links)
No description available.
3

Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers

Weber, Marlene January 2018 (has links)
The automotive industry is facing significant technological and sociological shifts, calling for an improved understanding of driver and passenger behaviours, emotions and needs, and a transformation of the traditional automotive design process. This research takes a human-centred approach to automotive research, investigating the users' emotional states during automobile driving, with the goal of developing a framework for automotive emotion research, thus enabling the integration of technological advances into the driving environment. A literature review of human emotion and emotion in an automotive context was conducted, followed by three driving studies investigating emotion through Facial-Expression Analysis (FEA): An exploratory study investigated whether emotion elicitation can be applied in driving simulators, and whether FEA can detect the emotions triggered. The results supported the applicability of emotion elicitation in a lab-based environment to trigger emotional responses, and the ability of FEA to detect them. An on-road driving study was then conducted in a natural setting to investigate whether the natures and frequencies of emotion events could be automatically measured, and whether triggers could be assigned to them. Overall, 730 emotion events were detected during a total driving time of 440 minutes, and event triggers were assigned to 92% of the emotion events. A similar second on-road study was conducted in a partially controlled setting on a planned road circuit. In 840 minutes, 1947 emotion events were measured, and triggers were successfully assigned to 94% of these. Comparison of emotion events across different road types demonstrated substantial variance in the natures, frequencies and triggers of emotions. The results showed that emotions play a significant role during automobile driving, and that the possibility of assigning triggers can be used to create a better understanding of the causes of emotions in the automotive habitat. Both on-road studies were compared through statistical analysis to investigate the influences of the different study settings; certain conditions (e.g. driving setting, social interaction) showed significant influence on emotions during driving. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment, through which systems and factors causing positive and negative emotional effects can be identified. The methodology and results can be applied to design and research processes, allowing the identification of issues and opportunities in current automotive design to address challenges of future automotive design. Suggested future research includes the investigation of a wider variety of road types and situations, testing with different automobiles, and the combination of multiple measurement techniques.
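The thesis does not publish its exact event criteria, but one plausible way to extract discrete "emotion events" from a continuous FEA intensity channel is thresholding with a minimum duration; the threshold and frame count below are assumptions for illustration:

```python
# Hedged sketch: segment a facial-expression intensity time series into
# discrete events. Threshold (0.5) and minimum duration (15 frames) are
# illustrative assumptions, not the thesis's parameters.
import numpy as np

def detect_events(intensity, threshold=0.5, min_frames=15):
    """Return (start, end) frame indices where intensity stays above
    threshold for at least min_frames consecutive frames."""
    above = intensity > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_frames:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_frames:
        events.append((start, len(above)))
    return events

# 30 fps example: a noisy channel with one sustained two-second burst.
rng = np.random.default_rng(0)
signal = rng.uniform(0, 0.3, 600)
signal[200:260] = 0.8
print(detect_events(signal))  # -> [(200, 260)]
```

Trigger assignment would then pair each detected interval with synchronized context data (road type, traffic, in-cabin interaction) from the same timestamps.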
4

Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water

Crist, Courtney Alissa 27 April 2016 (has links)
Sensory and consumer sciences aim to understand the influences of product acceptability and purchase decisions. The food industry measures product acceptability through hedonic testing but often does not assess implicit or qualitative response. Incorporation of qualitative research and automated facial expression analysis (AFEA) may supplement hedonic acceptability testing to provide product insights. The purpose of this research was to assess the application of AFEA and qualitative analysis to understand consumer experience and response. In two studies, AFEA was applied to elucidate consumers' emotional response to dairy (n=42) and water (n=46) beverages. For dairy, unflavored milk (x̄=6.6±1.8) and vanilla syrup flavored milk (x̄=5.9±2.2) (p>0.05) were acceptably rated (1=dislike extremely; 9=like extremely) while salty flavored milk (x̄=2.3±1.3) was least acceptable (p<0.05). Compared to unflavored milk, vanilla syrup flavored milk generated emotional responses in which surprised was intermittently present over time (10 sec) (p<0.025), while salty flavored milk created an intense disgust response among other emotions (p<0.025). Using a bitter solutions model in water, acceptability decreased as bitter intensity increased (rs=-0.90; p<0.0001). Facial expressions characterized as disgust and happy increased in duration as bitter intensity increased, while neutral remained similar across bitter intensities compared to the control (p<0.025). In a mixed methods analysis to enumerate microbial populations, assess water quality, and qualitatively gain consumer insights regarding water fountains and water filling stations, results indicated that water quality did not differ between water fountains and water filling stations (metals, pH, chlorine, and microbial) (p>0.05). However, the exteriors of water fountains were microbially (8.8 CFU/cm^2) and visually cleaner than those of filling stations (10.4x10^3 CFU/cm^2) (p<0.05). Qualitative analysis contradicted the quantitative findings, as participants preferred water filling stations because they felt they were cleaner and delivered higher quality water. Lastly, the Theory of Planned Behavior assisted in understanding undergraduates' reusable water bottle behavior and revealed 11 categories (attitudes n=6; subjective norms n=2; perceived behavioral control n=2; intentions n=1). Collectively, the use of AFEA and qualitative analysis provided additional insight into consumer-product interaction and acceptability; however, additional research should include improving the sensitivity of AFEA for consumer product evaluation. / Ph. D.
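The repeated "p<0.025" comparisons above reflect per-time-slice treatment-versus-control testing of emotion intensities. A minimal sketch of that style of analysis, with simulated data and a one-sided Mann-Whitney U test as assumed test choice (the thesis may use a different nonparametric test and slice width):

```python
# Hedged sketch: at each time slice, compare treatment vs. control emotion
# intensities across panelists (alpha = 0.025, one-sided). Data, slice
# width, and the specific test are illustrative assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n_panelists, n_slices = 46, 50                       # e.g., 10 s at 5 Hz
control = rng.beta(2, 8, (n_panelists, n_slices))    # "disgust" for plain water
treatment = rng.beta(2, 8, (n_panelists, n_slices))
treatment[:, 20:35] += 0.25                          # stronger disgust mid-tasting

significant = [
    t for t in range(n_slices)
    if mannwhitneyu(treatment[:, t], control[:, t],
                    alternative="greater").pvalue < 0.025
]
print("slices with elevated disgust:", significant)
```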
5

Facial Analysis for Real-Time Application: A Review in Visual Cues Detection Techniques

Yap, Moi Hoon, Ugail, Hassan, Zwiggelaar, R. 30 August 2012 (has links)
Emerging applications in surveillance, the entertainment industry and other human-computer interaction applications have motivated the development of real-time facial analysis research covering detection, tracking and recognition. In this paper, the authors present a review of recent facial analysis for real-time applications, providing an up-to-date review of research efforts in human computing techniques in the visible domain. The main goal is to provide a comprehensive reference source for researchers involved in real-time facial analysis, regardless of their specific research areas. First, the authors undertake a thorough survey and comparison of face detection techniques, discussing some prominent face detection methods presented in the literature; the performance of these techniques is evaluated using benchmark databases. Subsequently, the authors provide an overview of the state of the art in facial expression analysis and the importance of the psychology inherent in it. During the last decades, facial expression analysis has slowly evolved into automatic facial expression analysis due to the popularity of digital media and the maturity of computer vision; hence, the authors review some existing automatic facial expression analysis techniques. Finally, the authors provide an exemplar for the development of a real-time facial analysis application and propose a model for facial analysis. This review shows that facial analysis for real-time applications involves multi-disciplinary aspects, and that it is important to take all domains into account when building a reliable system.
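A minimal sketch of the kind of real-time face-detection loop such applications build on, using the OpenCV implementation of the Viola-Jones detector; this is one widely available baseline, not code from the review, and the camera index and tuning parameters are assumptions:

```python
# Hedged sketch: real-time face detection with OpenCV's Viola-Jones
# (Haar cascade) detector. Requires opencv-python and a webcam at index 0.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor/minNeighbors trade detection rate against false positives.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Detection output like this is typically the first stage of the pipeline the review describes, feeding tracking and expression-analysis stages downstream.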
6

Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products

Clark, Elizabeth A. 17 March 2020 (has links)
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through hedonic testing but often does not capture affectual response as it pertains to product-generated (PG) and product-associated (PA) emotions. This research sought to assess the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing literature was performed that focused on the Facial Action Coding System (FACS) and its use to investigate consumer affect and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as in how emotional response is inferred from Action Unit (AU) activation. Automatic Facial Expression Analysis (AFEA), which automates FACS and translates facial muscular positioning into the basic universal emotions, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). The ST-IAT did not yield significant PA emotions for packaged milk (p>0.05), but correspondence analysis of CATA data produced PA emotion insights, including term selection based on arousal and underlying approach/withdrawal motivation related to packaging pigmentation. Time series statistical analysis of AFEA data provided increased insight into significant emotion expression, but the lack of difference (p>0.05) between certain expressed emotions that share no related AUs, such as happy and disgust, indicates that AFEA software may not be identifying AUs and determining emotion-based inferences in agreement with FACS. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the sensory-engaging nature of control stimuli could impact time series statistical analysis of AFEA data. When compared against the limited sensory-engaging (blank screen) control, contempt, happy, and angry were expressed more intensely (p<0.025) and with greater incidence for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (albeit fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken when comparing control and experimental stimuli in time series analysis, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for consumer products such as foods and beverages.
/ Doctor of Philosophy / Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed that focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as in how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach and/or withdraw from milk based on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that AFEA software may not be determining emotions as outlined in the established FACS procedures. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for sensory-engaging consumer products such as foods and beverages.
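A minimal sketch of the kind of AU-to-emotion lookup at issue in the FACS inconsistencies noted above; the prototypes below are common textbook simplifications (e.g., AU6+AU12 for happiness), not the full FACS/EMFACS rule set, and the function name is hypothetical:

```python
# Hedged sketch: map a set of activated Action Units to prototype basic
# emotions. Simplified illustration only; real FACS coding also weighs
# AU intensity, timing, and combination rules.
EMOTION_PROTOTYPES = {
    "happy":    {6, 12},
    "sad":      {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "fear":     {1, 2, 4, 5, 7, 20, 26},
    "anger":    {4, 5, 7, 23},
    "disgust":  {9, 15},
}

def infer_emotions(active_aus: set[int]) -> list[str]:
    """Return emotions whose full AU prototype is contained in active_aus."""
    return [e for e, aus in EMOTION_PROTOTYPES.items() if aus <= active_aus]

print(infer_emotions({6, 12}))         # ['happy']
print(infer_emotions({1, 4, 15, 17}))  # ['sad']
```

Under a mapping like this, happy and disgust share no AUs, which is why the abstract treats their statistical indistinguishability in AFEA output as a red flag.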
7

Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks

Chu, Wen-Sheng 01 January 2017 (has links)
Automatic analysis of facial actions (AFA) can reveal a person's emotion, intention, and physical state, and makes possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs, and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity in and between the BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable. We introduce an unsupervised branch-and-bound framework to discover correlated facial actions in un-annotated video, an approach we term Common Event Discovery (CED). We evaluate CED on video and motion capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to them.
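A minimal sketch of a convolutional network with LSTM for multi-label AU occurrence, in the spirit of the end-to-end framework described; the layer sizes, input resolution, and binary cross-entropy loss are illustrative assumptions, not the thesis architecture (PyTorch):

```python
# Hedged sketch: per-frame CNN features fed to an LSTM, with one logit per
# AU so multiple AUs can be active at once (multi-label).
import torch
import torch.nn as nn

class CnnLstmAU(nn.Module):
    def __init__(self, n_aus=12, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)  # temporal model
        self.head = nn.Linear(hidden, n_aus)      # one logit per AU

    def forward(self, clips):                     # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                     # (B, T, n_aus) logits

model = CnnLstmAU()
logits = model(torch.randn(2, 8, 1, 96, 96))      # 2 clips of 8 frames
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (2, 8, 12)).float())
print(logits.shape, loss.item())
```

The multi-label head is the key point: unlike single-emotion classification, AU detection must allow co-occurring and differently base-rated labels, which is what the sampling strategies in the thesis address.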
8

Visual Observation of Human Emotions / L'observation visuelle des émotions humaines

Jain, Varun 30 March 2015 (has links)
Cette thèse a pour sujet le développement de méthodes et de techniques permettant d'inférer l'état affectif d'une personne à partir d'informations visuelles. Plus précisément, nous nous intéressons à l'analyse d'expressions du visage, puisque le visage est la partie la mieux visible du corps, et que l'expression du visage est la manifestation la plus évidente de l'affect. Nous étudions différentes théories psychologiques concernant affect et émotions, et différentes façons de représenter et de classifier les émotions d'une part et la relation entre expression du visage et émotion sous-jacente d'autre part. Nous présentons les dérivées Gaussiennes multi-échelle en tant que descripteur d'images pour l'estimation de la pose de la tête, pour la détection de sourire, puis aussi pour la mesure de l'affect. Nous utilisons l'analyse en composantes principales pour la réduction de la dimensionnalité, et les machines à support de vecteur pour la classification et la régression. Nous appliquons cette même architecture, simple et efficace, aux différents problèmes que sont l'estimation de la pose de tête, la détection de sourire, et la mesure d'affect. Nous montrons que non seulement les dérivées Gaussiennes multi-échelle ont une performance supérieure aux populaires filtres de Gabor, mais qu'elles sont également moins coûteuses en calculs. Lors de nos expérimentations nous avons constaté que dans le cas d'un éclairage partiel du visage les dérivées Gaussiennes multi-échelle ne fournissent pas une description d'image suffisamment discriminante. Pour résoudre ce problème nous combinons des dérivées Gaussiennes avec des histogrammes locaux de type LBP (Local Binary Pattern). Avec cette combinaison nous obtenons des résultats à la hauteur de l'état de l'art pour la détection de sourire dans la base d'images GENKI qui comporte des images de personnes trouvées «dans la nature» sur internet, et avec la difficile «extended YaleB database». Pour la classification dans la reconnaissance de visage nous utilisons un apprentissage métrique avec comme mesure de similarité une distance de Minkowski. Nous obtenons le résultat que les normes L1 et L2 ne fournissent pas toujours la distance optimale; cet optimum est souvent obtenu avec une norme Lp où p n'est pas entier. Finalement, nous développons un système multi-modal pour la détection de dépressions nerveuses, avec en entrée des informations audio et vidéo. Pour la détection de mouvements intra-faciaux dans les données vidéo nous utilisons des descripteurs de type LBP-TOP (Local Binary Patterns - Three Orthogonal Planes), alors que nous utilisons des trajectoires denses pour les mouvements plus globaux, par exemple de la tête ou des épaules. Nous avons trouvé que les descripteurs LBP-TOP encodés avec des vecteurs de Fisher suffisent pour dépasser la performance de la méthode de référence dans la compétition «Audio Visual Emotion Challenge (AVEC) 2014». Nous disposons donc d'une technique effective pour l'évaluation de l'état dépressif, technique qui peut aisément être étendue à d'autres formes d'émotions qui varient lentement, comme l'humeur (mood en anglais). / In this thesis we focus on the development of methods and techniques to infer affect from visual information. We concentrate on facial expression analysis since the face is one of the least occluded parts of the body and facial expressions are one of the most visible manifestations of affect.
We explore the different psychological theories on affect and emotion, different ways to represent and classify emotions, and the relationship between facial expressions and underlying emotions. We present the use of multiscale Gaussian derivatives as an image descriptor for head pose estimation and smile detection, before using them for affect sensing. Principal Component Analysis is used for dimensionality reduction while Support Vector Machines are used for classification and regression. We are able to employ the same simple and effective architecture for head pose estimation, smile detection and affect sensing. We also demonstrate that multiscale Gaussian derivatives not only perform better than the popular Gabor filters but are also less expensive to compute. While performing these experiments we discovered that multiscale Gaussian derivatives do not provide an appropriately discriminative image description when the face is only partly illuminated. We overcome this problem by combining Gaussian derivatives with Local Binary Pattern (LBP) histograms. This combination helps us achieve state-of-the-art results for smile detection on the benchmark GENKI database, which contains images of people in the "wild" collected from the internet. We use the same description method for face recognition on the CMU-PIE database and the challenging extended YaleB database, and our results compare well with the state of the art. In the case of face recognition we use metric learning for classification, adopting the Minkowski distance as the similarity measure. We find that the L1 and L2 norms are not always the optimum distance metrics; the optimum is often an Lp norm where p is not an integer. Lastly, we develop a multi-modal system for depression estimation with audio and video information as input. We use Local Binary Patterns - Three Orthogonal Planes (LBP-TOP) features to capture intra-facial movements in the videos, and dense trajectories for macro movements such as the movement of the head and shoulders. These video features, along with Low Level Descriptor (LLD) audio features, are encoded using Fisher Vectors, and finally a Support Vector Machine is used for regression. We discover that the LBP-TOP features encoded with Fisher Vectors alone are enough to outperform the baseline method on the Audio Visual Emotion Challenge (AVEC) 2014 database. We thereby present an effective technique for depression estimation which can easily be extended to other slowly varying aspects of emotion, such as mood.
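A minimal sketch of the descriptor-plus-classifier pipeline this abstract describes (multiscale Gaussian derivatives, then PCA, then an SVM); the scales, derivative orders, and toy data are assumptions for illustration, not the thesis's configuration:

```python
# Hedged sketch: first-order Gaussian derivative responses at several
# scales as an image descriptor, reduced with PCA and classified with SVM.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gaussian_derivative_features(img, sigmas=(1, 2, 4)):
    """Stack first-order Gaussian derivative responses at several scales."""
    feats = []
    for s in sigmas:
        feats.append(gaussian_filter(img, s, order=(0, 1)))  # d/dx
        feats.append(gaussian_filter(img, s, order=(1, 0)))  # d/dy
    return np.concatenate([f.ravel() for f in feats])

# Toy stand-in for face crops: 100 random 24x24 patches, binary labels.
rng = np.random.default_rng(2)
X = np.array([gaussian_derivative_features(rng.normal(size=(24, 24)))
              for _ in range(100)])
y = rng.integers(0, 2, 100)

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X[:80], y[:80])
print("toy accuracy:", clf.score(X[80:], y[80:]))
```

The appeal of this architecture, as the abstract notes, is reuse: the same descriptor and classifier stack serves head pose estimation, smile detection, and affect sensing with only the labels changing.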
9

Computational Methods for the Study of Face Perception

Rivera, Samuel 19 December 2012 (has links)
No description available.
