151 |
INFANTS’ PERCEPTION OF EMOTION FROM DYNAMIC BODY MOVEMENTS / Zieber, Nicole R., 01 January 2012
In humans, the capacity to extract meaning from another person’s behavior is fundamental to social competency. Adults recognize emotions conveyed by body movements with accuracy comparable to that for facial expressions. While infancy research has examined the development of facial and vocal emotion processing extensively, no prior study has explored infants’ perception of emotion from body movements. The current studies examined the development of emotion processing from body gestures. In Experiment 1, I asked whether 6.5-month-old infants would prefer to view emotional versus neutral body movements. The results indicate that infants prefer to view a happy versus a neutral body action when the videos are presented upright, but fail to exhibit a preference when the videos are inverted. This suggests that the preference for the emotional body movement was not driven by low-level features (such as the amount or size of the movement displayed), but rather by its affective content.
Experiments 2A and 2B sought to extend the findings of Experiment 1 by asking whether infants are able to match affective body expressions to their corresponding vocal emotional expressions. In both experiments, infants were tested using an intermodal preference technique: Infants were exposed to a happy and an angry body expression presented side by side while hearing either a happy or angry vocalization. An inverted condition was included to investigate whether matching was based solely upon some feature redundantly specified across modalities (e.g., tempo). In Experiment 2A, 6.5-month-old infants looked longer at the emotionally congruent videos when they were presented upright, but did not display a preference when the same videos were inverted. In Experiment 2B, 3.5-month-olds tested in the same manner exhibited a preference for the incongruent video in the upright condition, but did not show a preference when the stimuli were inverted. These results demonstrate that even young infants are sensitive to emotions conveyed by bodies, indicating that sophisticated emotion processing capabilities are present early in life.
152 |
EvoFIT: a holistic, evolutionary facial imaging system / Frowd, Charlie David, January 2002
This thesis details the development and evaluation of a new photofitting approach. The motivation for this work is that current photofit systems used by the police - whether manual or computerized - do not appear to work very well. Part of the problem with these approaches is that they involve a single facial representation that necessitates a verbal interaction. When multiple faces are presented instead, our innate ability to recognize faces is capitalized on (and the potentially disruptive effect of the verbal component is reduced). The approach works by employing Genetic Algorithms to evolve a small group of faces to be more like a desired target. The main evolutionary influence is user input specifying the similarity of the presented images to the target under construction. The thesis follows three main phases of development. The first involves a simple system modelling the internal components of a face (eyes, eyebrows, nose and mouth), with features in a fixed relationship to each other. The second phase adds external facial features (hair and ears) along with an appropriate head shape and changes in the relationship between features. Because the underlying model is based on Principal Components Analysis, it captures the statistics of how faces vary in terms of shading, shape and the relationship between features. Modelling was carried out in this way to create more realistic looking photofits and to guard against the implausible featural relationships possible with traditional approaches. The encouraging results of these two sections prompted the development of a full photofit system: EvoFIT. This software is shown to have continued promise both in the lab and in a real case. Future work is directed particularly at resolving issues concerning the anonymity of the database faces and the creation of photofits from the subject's memory of a target.
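As a rough illustration of the evolutionary loop described above (not EvoFIT's actual implementation), the sketch below evolves faces represented as vectors of PCA coefficients, with a rating function standing in for the witness's similarity judgements; the population size, mutation scale and selection scheme are all illustrative assumptions:

```python
import random

def evolve_faces(rate_fn, n_coeffs=30, pop_size=6, generations=10, seed=0):
    """Sketch of an EvoFIT-style loop: each face is a vector of PCA
    coefficients; rate_fn stands in for the user's similarity rating."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(n_coeffs)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [rate_fn(face) for face in pop]
        # Keep the two best-rated faces as parents.
        parents = [f for _, f in sorted(zip(scores, pop), reverse=True)[:2]]
        # Children: per-coefficient crossover plus small Gaussian mutation.
        pop = [
            [rng.choice(pair) + rng.gauss(0, 0.1)
             for pair in zip(parents[0], parents[1])]
            for _ in range(pop_size)
        ]
    return max(pop, key=rate_fn)

# Toy "witness": prefers faces close to a hidden target vector.
target = [0.5] * 30
rating = lambda face: -sum((a - b) ** 2 for a, b in zip(face, target))
best = evolve_faces(rating)
```

In the real system the rating comes from the user looking at rendered faces, not from a known target, which is exactly why only relative fitness (selection) is needed.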
153 |
Facial motion perception in autism spectrum disorder and neurotypical controls / Girges, Christine, January 2015
Facial motion provides an abundance of information necessary for mediating social communication. Emotional expressions, head rotations and eye-gaze patterns allow us to extract categorical and qualitative information from others (Blake & Shiffrar, 2007). Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterised by a severe impairment in social cognition. One of the causes may be related to a fundamental deficit in perceiving human movement (Herrington et al., 2007). This hypothesis was investigated more closely within the current thesis. In neurotypical controls, the visual processing of facial motion was analysed via EEG alpha waves. Participants were tested on their ability to discriminate between successive animations (exhibiting rigid and nonrigid motion). The appearance of the stimuli remained constant over trials, meaning decisions were based solely on differential movement patterns. The parieto-occipital region was specifically selective to upright facial motion while the occipital cortex responded similarly to natural and manipulated faces. Over both regions, a distinct pattern of activity in response to upright faces was characterised by a transient decrease and subsequent increase in neural processing (Girges et al., 2014). These results were further supported by an fMRI study which showed sensitivity of the superior temporal sulcus (STS) to perceived facial movements relative to inanimate and animate stimuli. The ability to process information from dynamic faces was assessed in ASD. Participants were asked to recognise different sequences, unfamiliar identities and genders from facial motion captures. Stimuli were presented upright and inverted in order to assess configural processing. Relative to the controls, participants with ASD were significantly impaired on all three tasks and failed to show an inversion effect (O'Brien et al., 2014). 
In participants with ASD, functional neuroimaging revealed atypical activity in the visual cortex, STS and fronto-parietal regions thought to contain mirror neurons. These results point to a deficit in the visual processing of facial motion, which in turn may partly cause the social communicative impairments seen in ASD.
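The alpha-wave analysis mentioned above can be illustrated generically. This is not the thesis's pipeline, just a minimal periodogram-based estimate of power in the 8-12 Hz alpha band, checked on synthetic sinusoids (sampling rate and band edges are illustrative assumptions):

```python
import numpy as np

def alpha_power(signal, fs):
    """Mean power in the 8-12 Hz alpha band, via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

# Synthetic check: a 10 Hz sinusoid should carry far more alpha-band
# power than a 40 Hz (gamma-range) sinusoid of the same amplitude.
fs = 250
t = np.arange(0, 2, 1 / fs)
alpha_sig = np.sin(2 * np.pi * 10 * t)
gamma_sig = np.sin(2 * np.pi * 40 * t)
```

Real EEG pipelines typically use Welch's method with artifact rejection per epoch, but the band-selection step is the same idea.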
154 |
An investigation into the parameters influencing neural network based facial recognition / 05 September 2012
D.Ing. / This thesis deals with an investigation into facial recognition and some variables that influence the performance of such a system. Firstly, the influence of image variability on overall recognition performance is investigated; secondly, the performance and subsequent suitability of a neural network based system is tested. Both tests are carried out on two distinctly different databases, one more variable than the other. The results indicate that the greater the image variability, the more negatively the performance of a facial recognition system is affected. The results further indicate that the neural network system outperformed a more conventional statistical system.
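The variability finding can be illustrated with a toy simulation (not the system studied in the thesis): a nearest-centroid matcher on synthetic face vectors, where Gaussian noise stands in for image variability such as pose and lighting, and recognition accuracy falls as the noise grows:

```python
import numpy as np

def recognition_rate(noise_std, n_ids=10, dim=100, n_test=50, seed=0):
    """Nearest-centroid 'recognition' of synthetic face vectors: each
    identity is a fixed template; probes are templates plus Gaussian
    noise standing in for image variability (pose, lighting, etc.)."""
    rng = np.random.default_rng(seed)
    templates = rng.normal(size=(n_ids, dim))
    correct = 0
    for _ in range(n_test):
        true_id = rng.integers(n_ids)
        probe = templates[true_id] + rng.normal(scale=noise_std, size=dim)
        pred = np.argmin(np.linalg.norm(templates - probe, axis=1))
        correct += pred == true_id
    return correct / n_test

low_var = recognition_rate(noise_std=0.2)   # low-variability database
high_var = recognition_rate(noise_std=5.0)  # high-variability database
```

All numbers here (dimensionality, noise scales, gallery size) are illustrative assumptions; the point is only the direction of the effect.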
155 |
Face processing in persons with and without Alzheimer's disease / Unknown Date
This study aimed to understand differences in the strength and coordination of brain regions involved in processing faces in the presence of aging and/or progressing neuropathology (Alzheimer's disease). To this end, Experiment 1 evaluated age-related differences in basic face processing and the effects of familiarity on face processing. Overall, face processing in younger (22-35 yrs) and older participants (63-83 yrs) recruited a broadly distributed network of brain activity, but the distribution of activity varied depending on the age of the individual. The younger population utilized regions of the occipitotemporal, medial frontal and posterior parietal cortices, while the older population recruited a concentrated occipitotemporal network. The younger participants were also sensitive to the type of face presented, as Novel faces were associated with greater mean BOLD activity than either the Famous or Relatives faces. Interestingly, Relatives faces were associated with greater mean BOLD activity in more regions of the brain than found in any other analysis in Exp. 1, spanning the inferior frontal, medial temporal and inferior parietal cortices. In contrast, the older adults were not sensitive to the type of face presented, which could reflect a difference in the cognitive strategies used by the older population when presented with this type of face stimuli. Experiment 2 evaluated face processing and familiarity effects, and also emphasized the interactive roles that autobiographical processing and memory recency play in processing familiar faces, in mature adults (MA; 45-55 yrs), older adults (OA; 70-92 yrs) and patients suffering from Alzheimer's disease (AD; 70-92 yrs). MA participants had greater mean BOLD activity in more regions of the brain than observed in either of the older adult populations, spanning regions of the medial frontal, medial temporal, inferior parietal and occipital cortices.
OA, in contrast, utilized a concentrated frontal and medial temporal network, and AD participants had the greatest deficit in BOLD activity overall. Age-related differences in processing faces, in processing the type of face presented, in autobiographical information processing and in processing the recency of a memory were noted, as well as differences due to the deleterious effects of AD. / by Jeanna Winchester. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
156 |
The Happiness/Anger Superiority Effect: the influence of the gender of perceiver and poser in facial expression recognition / Unknown Date
Two experiments were conducted to investigate the impact of poser and perceiver gender on the Happiness/Anger Superiority effect and the Female Advantage in facial expression recognition. Happy, neutral, and angry facial expressions were presented on male and female faces under Continuous Flash Suppression (CFS). Participants of both genders indicated when the presented faces broke through the suppression. In the second experiment, angry and happy expressions were reduced to 50% intensity. At full intensity, there was no difference in the reaction time for female neutral and angry faces, but male faces showed a difference in detection between all expressions. Across experiments, male faces were detected later than female faces for all facial expressions. Happiness was generally detected faster than anger, except when on female faces at 50% intensity. No main effect for perceiver gender emerged. It was concluded that happiness is superior to anger in CFS, and that poser gender affects facial expression recognition. / by Sophia Peaco. / Thesis (M.A.)--Florida Atlantic University, 2013. / Includes bibliography.
157 |
Visual uncertainty in serial dependence: facing noise / Lidström, Anette, January 2019
Empirical evidence suggests that the visual system uses prior visual information to predict the future state of the world. This is believed to occur through an information integration mechanism known as serial dependence: current percepts are influenced by prior visual information in order to create perceptual continuity in an ever-changing, noisy environment. Serial dependence has been found to occur both for low-level stimulus features (e.g., numerosity, orientation) and for high-level stimuli like faces. Recent evidence indicates that serial dependence for low-level stimuli is affected by the reliability of the current stimulus: when current stimuli are low in reliability, the perceptual influence from previously viewed stimuli is stronger. However, it is not clear whether stimulus reliability also affects serial dependence for high-level stimuli like faces. Faces are highly complex stimuli that are processed differently from other objects, and face perception is suggested to be especially vulnerable to external visual noise. Here, I used regular and visually degraded face stimuli to investigate whether serial dependence for faces is affected by stimulus reliability. The results showed that previously viewed degraded faces did not have a strong influence on perception of currently viewed regular faces. In contrast, when currently viewed faces were degraded, the perceptual influence from previously viewed regular faces was rather strong. Surprisingly, there was also quite a strong perceptual influence from previously viewed faces on currently viewed faces when both were degraded. This could mean that the effect of stimulus reliability in serial dependence for faces reflects not a failure to encode degraded stimuli, but rather a perceptual choice.
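The reliability effect described here is often modelled as inverse-variance cue weighting. The following is a minimal sketch of that idea (a standard modelling assumption, not the thesis's own analysis): each cue is weighted by its inverse variance, so a noisier current stimulus pulls the percept toward the previous one:

```python
def serial_estimate(prev, current, prev_sd, current_sd):
    """Reliability-weighted average of the previous and current stimulus
    values: weights are inverse variances, so the less reliable cue
    contributes less to the combined percept."""
    w_prev = 1 / prev_sd ** 2
    w_cur = 1 / current_sd ** 2
    return (w_prev * prev + w_cur * current) / (w_prev + w_cur)

# A clear current face dominates; a degraded (high-noise) current face
# is drawn more strongly toward the previous stimulus at 0.0.
clear = serial_estimate(prev=0.0, current=1.0, prev_sd=1.0, current_sd=0.5)
degraded = serial_estimate(prev=0.0, current=1.0, prev_sd=1.0, current_sd=2.0)
```

With these illustrative numbers the clear-probe estimate stays near the current value while the degraded-probe estimate shifts most of the way toward the prior, which is the signature pattern the experiments test for.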
158 |
"I distinctly remember you!": an investigation of memory for faces with unusual features / Unknown Date
Many errors in recognition are made because various features of a stimulus are attended to inefficiently. Those features are not bound together and can then be confused with other information. Among the most common of these errors are conjunction errors, which occur when mismatched features of separate memories are combined to form a composite memory. This study tests how likely conjunction errors, along with other recognition errors, are to occur when participants watch videos of people with and without unusual facial features performing actions, followed by a one-week time lag. It was hypothesized that participants would falsely recognize actresses in the conjunction item condition more often than in the other conditions. The likelihood of falsely recognizing a new person increased when she was presented with an unusual feature, but the conjunction items overall were most often falsely recognized. / by Autumn Keif. / Thesis (M.A.)--Florida Atlantic University, 2012. / Includes bibliography.
159 |
Assessing Children’s Performance on the Facial Emotion Recognition Task with Familiar and Unfamiliar Faces: An Autism Study / Unknown Date
Studies exploring facial emotion recognition (FER) abilities in autism spectrum disorder (ASD) samples have yielded inconsistent results despite the widely accepted finding that an impairment in emotion recognition is a core component of ASD. The current study aimed to determine whether an FER task featuring both unfamiliar and familiar faces would highlight additional group differences between ASD children and typically developing (TD) children. We tested the two groups of 4- to 8-year-olds on this revised task, and also compared their resting-state brain activity using electroencephalogram (EEG) measurements. As hypothesized, the TD group had significantly higher overall emotion recognition percent scores. In addition, there was a significant interaction effect of group by familiarity, with the ASD group recognizing emotional expressions significantly better in familiar faces than in unfamiliar ones. This finding may be related to the preference of children with autism for people and situations to which they are accustomed. TD children did not demonstrate this pattern, as their recognition scores were approximately the same for familiar and unfamiliar faces. No significant group differences existed for EEG alpha power or EEG alpha asymmetry in frontal, central, temporal, parietal, or occipital brain regions. Also, neither of these EEG measurements was strongly correlated with the groups' FER performance. Further evidence is needed to assess the association between neurophysiological measurements and behavioral symptoms of ASD. The behavioral results of this study provide preliminary evidence that an FER task featuring both familiar and unfamiliar expressions produces a better assessment of emotion recognition ability. / Includes bibliography. / Thesis (M.A.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
160 |
2D/3D face recognition / Unknown Date
This dissertation introduces our work on face recognition using a novel approach based on creating a 3D face model from 2D face images. Together with pose angle estimation and illumination compensation, this method can be used successfully to recognize 2D faces with 3D recognition algorithms. The results reported here were obtained partially with our own face image database, which contains 2D and 3D face images of 50 subjects at 9 different pose angles. It is shown that by applying even the simple PCA algorithm, this new approach can yield successful recognition rates using 2D probe images and 3D gallery images. The insight gained from the 2D/3D face recognition study was also extended to the case involving 2D probe and 2D gallery images, which offers a more flexible approach since it is much easier and more practical to acquire 2D photos for recognition. To test the effectiveness of the proposed approach, the public AT&T face database, which contains 2D-only face photos of 40 subjects with 10 different images each, was utilized in the experimental study. The results from this investigation show that with our approach, the 3D recognition algorithm can be successfully applied to 2D-only images. The performance of the proposed approach was further compared with some existing face recognition techniques. Studies on imperfect conditions such as domain and pose/illumination variations were also carried out, and the performance of the algorithms on noisy photos was evaluated. Pros and cons of the proposed face recognition technique, along with suggestions for future studies, are also given in the dissertation. / by Guan Xin. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography.
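A minimal eigenfaces-style sketch of the "simple PCA algorithm" mentioned above, run on toy data rather than the author's databases (the vector dimension, subject count and noise level are illustrative assumptions):

```python
import numpy as np

def pca_recognize(gallery, probes, n_components=2):
    """Minimal eigenfaces-style matcher: project gallery and probe image
    vectors onto the top principal components of the gallery, then match
    each probe to the nearest gallery projection."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Principal axes from the SVD of the centered gallery matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    g_proj = centered @ basis.T
    p_proj = (probes - mean) @ basis.T
    # Nearest neighbour in the reduced space.
    dists = np.linalg.norm(p_proj[:, None, :] - g_proj[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy data: 3 "subjects", one gallery image vector each, plus probes
# that are slightly noisy copies of the gallery images.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(3, 64))
probes = gallery + rng.normal(scale=0.05, size=gallery.shape)
matches = pca_recognize(gallery, probes)
```

In the dissertation's 2D/3D setting the gallery vectors would come from rendered 3D models and the probes from 2D photos after pose and illumination normalization; the matching step itself is unchanged.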