151 |
Japanese native perceptions of the facial expressions of American learners of L2 Japanese in specified contexts. Shelton, Abigail Leigh, January 2018 (has links)
No description available.
|
152 |
Alexithymia Is Associated With Deficits in Visual Search for Emotional Faces in Clinical Depression. Suslow, Thomas; Günther, Vivien; Hensch, Tilman; Kersting, Anette; Bodenschatz, Charlott Maria, 31 March 2023 (has links)
Background: Alexithymia is characterized by difficulties identifying and describing one's own emotions. Alexithymic individuals are impaired in recognizing others' emotional facial expressions, and alexithymia is common in patients with major depressive disorder. The face-in-the-crowd task is a visual search paradigm that assesses the processing of multiple facial emotions. The present eye-tracking study examined the relationship between alexithymia and the visual processing of facial emotions in clinical depression.
Materials and Methods: Gaze behavior and manual response times of 20 alexithymic and 19 non-alexithymic depressed patients were compared in a face-in-the-crowd task. Alexithymia was measured with the 20-item Toronto Alexithymia Scale. Angry, happy, and neutral facial expressions of different individuals were shown as target and distractor stimuli. Our analyses of gaze behavior focused on latency to the target face, the number of distractor faces fixated before fixating the target, the number of target fixations, and the number of distractor faces fixated after fixating the target.
Results: Alexithymic patients exhibited generally slower decision latencies than non-alexithymic patients in the face-in-the-crowd task. The patient groups did not differ in latency to target, number of target fixations, or number of distractors fixated prior to target fixation. However, after having looked at the target, alexithymic patients fixated more distractors than non-alexithymic patients, regardless of expression condition.
Discussion: According to our results, alexithymia is associated with impairments in the visual processing of multiple facial emotions in clinical depression. Alexithymia appears to be associated with delayed manual reaction times and prolonged scanning after the first target fixation in depression, but it might have no impact on the early search phase. The observed deficits could indicate difficulties in target identification and/or decision-making when processing multiple emotional facial expressions. The impairments of alexithymic depressed patients in processing emotions in crowds of faces do not seem limited to a specific affective valence. In group situations, alexithymic depressed patients might be slower than non-alexithymic depressed patients at processing interindividual differences in emotional expressions, which could put them at a disadvantage in understanding non-verbal communication in groups.
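To make the four gaze measures concrete, here is a minimal sketch of how they could be computed from a single trial's fixation sequence; the input format and ROI labels are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Sketch: computing the four search measures from a labeled fixation sequence.
# Assumed input: list of (timestamp_ms, roi) tuples, roi in {"target", "distractor_N"}.

def search_measures(fixations):
    """Return latency to target, distractors fixated before/after target, target fixations."""
    first_target_idx = next(
        (i for i, (_, roi) in enumerate(fixations) if roi == "target"), None
    )
    if first_target_idx is None:
        return None  # target never fixated on this trial

    latency_ms = fixations[first_target_idx][0] - fixations[0][0]
    before = {roi for _, roi in fixations[:first_target_idx] if roi != "target"}
    after = {roi for _, roi in fixations[first_target_idx + 1:] if roi != "target"}
    n_target = sum(1 for _, roi in fixations if roi == "target")
    return {
        "latency_to_target_ms": latency_ms,
        "distractors_before_target": len(before),  # unique distractor faces
        "target_fixations": n_target,
        "distractors_after_target": len(after),
    }

# Example trial: two distractors fixated, then the target, then one more distractor.
trial = [(0, "distractor_1"), (250, "distractor_4"), (520, "target"), (800, "distractor_2")]
print(search_measures(trial))
```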
|
153 |
Image Emotion Analysis: Facial Expressions vs. Perceived Expressions. Ayyalasomayajula, Meghana, 20 December 2022 (has links)
No description available.
|
154 |
Electronic Customer Knowledge Management Systems: a multimodal interaction approach: an empirical investigation into the role of the multimodal interaction metaphors to improve usability of Electronic Customer Knowledge Management Systems (ECKMS) and increase the user's trust, knowledge and acceptance. Alotaibi, Mutlaq B.G., January 2009 (has links)
There has been an increasing demand for commercial organisations to foster real-time interaction with customers, because harnessing customer competencies has been shown to be a major contributor to benefits such as growth, innovation and competitiveness. This may drive organisations to embrace multimodal interaction and to complement Electronic Customer Knowledge Management Systems (E-CKMS) with audio-visual metaphors. Although the implementation of E-CKMS encounters several challenges, such as lack of trust and information overload, few empirical studies have assessed the role of audio-visual metaphors or investigated whether these technologies can be put into practice. This thesis therefore describes a comparative evaluation study carried out to examine the effect of incorporating multimodal metaphors into E-CKMS interfaces, not only on the usability of E-CKMS but also on the user's trust, knowledge and acceptance. An experimental E-CKMS platform was implemented with three different modes of interaction: visual-only E-CKMS (VCKMS) with text and graphics, multimodal E-CKMS (MCKMS) with speech, earcons and auditory icons, and avatar-enhanced multimodal E-CKMS (ACKMS). The three platforms were evaluated by three independent groups of twenty participants each (total = 60), who carried out eight common tasks of increasing complexity designed around three different styles. Another, dependent group of forty-eight participants (n = 48) interacted with the systems under similar usability conditions, performing six common tasks of two styles and filling in a questionnaire devised to measure aspects of user acceptance. The results revealed that ACKMS was more usable and acceptable than both MCKMS and VCKMS, whereas MCKMS was more usable than VCKMS but less acceptable. Inferential statistics indicated that these results were statistically significant.
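As a rough illustration of the kind of between-groups inference reported here, the sketch below runs a one-way ANOVA over usability scores for the three interface conditions; the scores are invented for the example and are not the thesis's data.

```python
# Sketch: between-groups comparison of usability scores for the three interfaces.
from scipy import stats

vckms = [62, 58, 65, 70, 61]   # visual-only scores (illustrative)
mckms = [71, 74, 69, 78, 72]   # multimodal scores (illustrative)
ackms = [80, 83, 77, 85, 81]   # avatar-enhanced scores (illustrative)

f_stat, p_value = stats.f_oneway(vckms, mckms, ackms)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```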
|
155 |
ERP Analyses of Perceiving Emotions and Eye Gaze in Faces: Differential Effects of Motherhood and High Autism Trait. Bagherzadeh-Azbari, Shadi, 08 May 2023 (has links)
Eye gaze and its direction are important non-verbal cues for establishing social interactions and for perceiving others' emotional facial expressions. Gaze direction itself, whether the eyes look straight at the viewer (direct gaze) or away (averted gaze), affects our social attention and emotional responses. This implies that both emotion and gaze carry informational value, and that the two might interact at early or later stages of neurocognitive processing. Despite a proposed theoretical basis for this interaction, the shared signal hypothesis (Adams & Kleck, 2003), there is a lack of structured electrophysiological investigations into the interactions between emotion and gaze, their neural correlates, and how they vary across populations. Addressing this need, the present doctoral dissertation used event-related brain potentials (ERPs) to study responses to emotional expressions and gaze direction in a novel paradigm combining static and dynamic gaze with facial expressions. The N170 and EPN were selected as ERP components believed to reflect gaze perception and reflexive attention, respectively. Three different populations were investigated. Study 1 investigated, in a normal sample, the amplitudes of the ERP components elicited by the initial presentation of faces and by subsequent changes of gaze direction in half of the trials. Study 2, motivated by the atypical face processing and diminished responses to eye gaze in autism, examined ERPs and eye movements in two samples of children varying in the severity of their autism traits. Study 3 addressed, in a large sample, the putatively increased sensitivity to emotion processing and eye gaze in mothers during the postpartum period, with a particular focus on infants' faces. Taken together, the results of the three studies demonstrate that in social interactions the emotional effects of faces are modulated by dynamic gaze direction.
|
156 |
Recognizing Compound Facial Expressions of Virtual Characters in Augmented Reality. Kastemaa, Juho, January 2017 (has links)
How can virtual characters be designed to express different emotions, including compound emotions that mix basic emotion expressions? Augmented reality (AR) can create engaging experiences for participants, and in recent years virtual faces and virtual characters have become increasingly realistic and expressive, for example in therapeutic applications where they can reduce costs. The validity of virtual expressions has been shown in studies on desktop computers, but less so in AR. In this paper, the basic and compound emotions of virtual characters were studied, and a character was designed to work with Microsoft HoloLens in AR. A process for blending basic emotions onto a human virtual character was developed, and the animations were modified using the Unity3D game engine. The participants (n = 24) experienced the virtual character in a job-interview context while wearing HoloLens mixed-reality glasses. The virtual character made basic and compound facial expressions, and the participants were asked to label them. The results show that participants recognized all seven basic emotions and seven compound emotions from a virtual character in AR using HoloLens; however, disgust was confused with sad, and angry was sometimes confused with disgust. Fearfully surprised was also often mistaken for awed. The compound emotions were recognized quite well, and the results indicate that the perceived valence changes depending on the facial expression. The study provides insights into how blended emotive expressions for virtual characters are created and perceived.
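A hypothetical sketch of how such labeling data can be tabulated to reveal the reported confusions (e.g. disgust labeled as sad); the trial records below are invented for illustration.

```python
# Sketch: confusion matrix of displayed vs. labeled emotions from labeling trials.
import pandas as pd

trials = pd.DataFrame({
    "displayed": ["disgust", "disgust", "angry", "fearfully surprised", "happy"],
    "labeled":   ["sad",     "disgust", "disgust", "awed",              "happy"],
})
confusion = pd.crosstab(trials["displayed"], trials["labeled"])
print(confusion)  # off-diagonal cells show the confusions reported above
```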
|
157 |
Construct validity, responsiveness and reliability of the Feline Grimace Scale© in kittens. Cheng, Alice J., 12 1900 (has links)
This prospective, randomized, blinded study investigated the construct validity, responsiveness and reliability of the Feline Grimace Scale (FGS) in kittens.
Thirty-six healthy female kittens (aged 10 weeks to 6 months) were video-recorded before, and 1 and 2 h after, ovariohysterectomy performed under an opioid-free injectable anesthetic protocol with or without multimodal analgesia. Painful kittens were additionally filmed before and 1 h after administration of rescue analgesia (buprenorphine 0.02 mg/kg IM). One hundred eleven facial images collected from the video recordings were randomly scored twice, five weeks apart, by four observers blinded to treatment groups and time points, using the FGS. The five action units (AU) of the FGS were scored (ear position, orbital tightening, muzzle tension, whiskers position and head position; 0–2 each). Construct validity, responsiveness, and inter- and intra-rater reliability of the FGS were evaluated using linear models with Benjamini–Hochberg correction, the Wilcoxon signed-rank test, and single intra-class correlation coefficients (ICCsingle), respectively (P < 0.05).
The FGS total ratio scores were higher 1 and 2 h after ovariohysterectomy (median [interquartile range, IQR]: 0.30 [0.20–0.40] and 0.30 [0.20–0.40], respectively) than at baseline (median [IQR]: 0.10 [0.00–0.30]) (P < 0.001), and lower after the administration of rescue analgesia (median [IQR]: 0.20 [0.10–0.38]) than before (median [IQR]: 0.40 [0.20–0.50]) (P < 0.001). The inter-rater ICCsingle was 0.68 for the FGS total ratio scores and 0.35–0.70 for the AUs considered individually. The intra-rater ICCsingle was 0.77–0.91 for the FGS total ratio scores and 0.55–1.00 for the AUs considered individually.
The FGS is a valid and responsive acute pain scoring instrument with moderate inter-rater reliability and good to excellent intra-rater reliability in kittens.
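The sketch below illustrates, under an assumed long-format data layout, the two reliability and responsiveness analyses named above: a single-measure intraclass correlation across observers (via pingouin) and a Wilcoxon signed-rank test on paired scores. All values are invented.

```python
# Sketch: single-measure ICC across observers, plus a paired Wilcoxon test.
import pandas as pd
import pingouin as pg
from scipy import stats

# Assumed long format: one row per (image, observer) with the FGS total ratio score.
scores = pd.DataFrame({
    "image":    [1] * 4 + [2] * 4 + [3] * 4,
    "observer": ["A", "B", "C", "D"] * 3,
    "fgs":      [0.3, 0.4, 0.3, 0.2, 0.1, 0.0, 0.2, 0.1, 0.4, 0.5, 0.4, 0.3],
})

# Inter-rater reliability: single-measure ICC across the four observers.
icc = pg.intraclass_corr(data=scores, targets="image", raters="observer", ratings="fgs")
print(icc[icc["Type"] == "ICC2"])  # ICC2 ~ single measure, random raters

# Responsiveness: paired scores before vs. after rescue analgesia (illustrative values).
before = [0.40, 0.35, 0.50, 0.45, 0.38, 0.42]
after  = [0.20, 0.15, 0.30, 0.25, 0.18, 0.22]
print(stats.wilcoxon(before, after))
```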
|
158 |
The role of edutainment in e-learning: an empirical study. Ayad, Khaled A. A., January 2011 (has links)
Impersonal, non-face-to-face contact and text-based interfaces in e-Learning present major problems for learners, who miss out on vital personal interaction, useful feedback messages, and real-time information about their learning performance. This research programme proposes a multimodal approach combined with edutainment, which is expected to improve communication between users and e-Learning systems. This thesis empirically investigates users' effectiveness, efficiency and satisfaction in order to determine the influence of edutainment (e.g. amusing speech and facial expressions) combined with multimodal metaphors (e.g. speech, earcons and avatars) within e-Learning environments. Besides text, speech, visual and earcon modalities, avatars are incorporated to offer a visual and auditory dimension to online learning. The methodology comprises a literature review and three experimental platforms. The initial experiment serves as a first step towards investigating the feasibility of completing the tasks and objectives of the research project outlined above. The remaining two experiments explore further the role of edutainment in enhancing e-Learning user interfaces. The overall challenge is to enhance user-interface usability, improve the presentation of learning material in e-Learning systems, improve user enjoyment, enhance interactivity and learning performance, and contribute to guidelines for multimodal involvement in the context of edutainment. The results of the experiments presented in this thesis show an improvement in user enjoyment, as measured by satisfaction. In the first experiment, the enjoyment level was 11% higher for the Edutainment (E) platform than for the Non-edutainment (NE) interface. In the second experiment, the Game-Based Learning (GBL) interface scored 14% higher than the Virtual Class (VC) interface and 20.85% higher than the Storytelling (ST) interface; in the third experiment, the game incorporating avatars scored a further 3% higher than the other platforms. Improvements in user performance and learning retention were detected through effectiveness and efficiency measurements. In the first experiment, the mean task times for conditions (E) and (NE) did not differ significantly when tested with a t-test. In the second experiment, the time spent in condition (GBL) was 7–10 seconds higher than in the other conditions. In the third experiment, the mean task times were comparable across all conditions, with an average of 22.8%. Regarding effectiveness, the first experiment showed that the mean number of correct answers in condition (E) was 20% higher than in condition (NE). In the second experiment, users in condition (GBL) performed better than users in the other conditions: the percentage of correct answers was 20% and 34.7% higher in condition (GBL) than in conditions (VC) and (ST), respectively. Finally, a set of empirically derived guidelines was produced for the design of usable multimodal e-Learning and edutainment interfaces.
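For instance, the comparison of mean task times between conditions (E) and (NE) can be reproduced in outline with an independent-samples t-test, as sketched below with invented timings.

```python
# Sketch: t-test on task times for the edutainment (E) vs. non-edutainment (NE) conditions.
from scipy import stats

times_e  = [41.2, 38.5, 44.0, 40.1, 39.7]   # seconds, condition (E), illustrative
times_ne = [42.0, 39.9, 43.1, 41.5, 40.2]   # seconds, condition (NE), illustrative

t_stat, p_value = stats.ttest_ind(times_e, times_ne)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # non-significant here, as reported
```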
|
159 |
L’utilisation de l’information visuelle en reconnaissance d’expressions faciales d’émotion. Blais, Caroline, 09 1900 (has links)
The ability to recognize facial expressions is crucial for the success of social communication. The information used by the visual system to categorize static basic facial expressions is now relatively well known. However, the visual information used to discriminate the basic facial expressions from one another is still unknown, for both static and dynamic expressions. Many believe that the eye region of a facial expression is particularly important when it comes to reading others' emotions. The aim of the first article of this thesis is to determine which information the visual system uses to discriminate between the basic facial expressions, and to test the hypothesis that the eye region is crucial for this task. The Bubbles method (Gosselin & Schyns, 2001) is used with static (Exp. 1) and dynamic (Exp. 2) facial expressions in order to determine which facial areas are used for the task (Exps. 1 and 2) and in which temporal order these areas are used (Exp. 2). The results show that, in contrast with the aforementioned belief, the mouth region is significantly more useful than the eye region when discriminating between the basic facial expressions. Despite this preponderant role of the mouth, it is the eye area, not the mouth area, that is underutilized by many clinical populations with difficulties recognizing facial expressions. This observation could suggest that use of the eye area varies as a function of the ability to recognize facial expressions. The aim of the second article is thus to examine how individual differences in the ability to recognize facial expressions relate to the visual-information extraction strategies used for this task. The results show a positive correlation between participants' ability and their use of the mouth region, suggesting qualitative differences between the strategies of clinical patients and normal participants. A positive correlation is also found between ability and use of the left-eye area, but no correlation is found between ability and use of the right-eye area. These results suggest that the best participants do not differ from the worst merely by using the information available in the stimulus more efficiently: qualitative differences in visual-information extraction strategies may exist even within the normal population.
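Since the Bubbles method is central to both articles, here is a compact, self-contained sketch of its logic: reveal a face through random Gaussian apertures, then compare the masks on correct vs. incorrect trials to estimate which regions drive discrimination. The image size, bubble parameters and simulated observer are all assumptions for illustration, not the thesis's actual stimuli or data.

```python
# Sketch of the Bubbles logic (Gosselin & Schyns, 2001), with simulated responses.
import numpy as np

rng = np.random.default_rng(0)
H = W = 128                         # stimulus size in pixels (assumed)
N_TRIALS, N_BUBBLES, SIGMA = 500, 10, 8.0

yy, xx = np.mgrid[0:H, 0:W]

def bubble_mask():
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    mask = np.zeros((H, W))
    for _ in range(N_BUBBLES):
        cy, cx = rng.integers(0, H), rng.integers(0, W)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * SIGMA ** 2))
    return np.clip(mask, 0.0, 1.0)

masks = np.stack([bubble_mask() for _ in range(N_TRIALS)])

# Simulated observer: more accurate when the (assumed) mouth region is revealed.
mouth_revealed = masks[:, 90:110, 40:88].mean(axis=(1, 2))
correct = rng.random(N_TRIALS) < 0.5 + 0.4 * mouth_revealed

# Classification image: mean mask on correct trials minus mean on incorrect trials.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print("peak diagnostic region at", np.unravel_index(ci.argmax(), ci.shape))
```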
|
160 |
L’influence du vieillissement normal et pathologique sur le traitement des expressions faciales et du jugement de confiance. Éthier-Majcher, Catherine, 04 1900 (has links)
Determining whether someone looks trustworthy is, throughout our lives, a basic decision underlying our daily social interactions. Recent studies in young adults have suggested that face-based trustworthiness judgments may be an extension of facial-expression judgments, more specifically of anger and happiness judgments (Todorov, 2008). Even though trustworthiness judgments play a great role in our social interactions throughout our lives, to our knowledge no study has explored how this process evolves with aging. Yet, knowing that healthy older adults are less efficient than younger adults at identifying facial expressions (Ruffman et al., 2008; Calder et al., 2003), one could expect differences between young and older adults in the way they judge trustworthiness. This work explores, for the first time, the perceptual processes underlying trustworthiness judgments in a healthy older population as well as in patients with fronto-temporal dementia (FTD). Results show that representations of anger, happiness and trustworthiness are similar in young and healthy older adults, and they suggest that a relationship does exist between emotional judgments and trustworthiness judgments. Moreover, this relationship persists through aging, but healthy older adults rely more than younger adults on their representation of anger when judging trustworthiness. Finally, FTD patients show representations of anger, happiness and trustworthiness that differ from those of healthy older controls, and they seem to rely more than controls on their representation of happiness when judging the trustworthiness of a face.
|