11. Using interference to track developmental changes in face processing. Knowles, Mark Michael, January 2011.
A series of six experiments is reported in this thesis: four investigated developmental trends in configural face processing using an interference paradigm, and two investigated developmental trends in face gender classification. Results from experiments 1-3 indicated that developmental differences in face recognition depend on the type of stimulus employed. Unlike young children, older children and adults suffered a significant decrease in both accuracy (face identification) and response time from whole, inner, and outer face recognition to whole, inner, and outer meld face recognition, where meld faces represent configurally disrupted faces. Results from experiment 4 indicated that this effect was significantly reduced when part faces rather than whole faces were presented at the encoding stage. These findings were interpreted using a concept of featural and configural processing within Valentine's (1991) face-space model. Results from experiments 5-6 indicated that, unlike face recognition, configural disruption did not affect differences in performance across the age span. However, in line with the face recognition experiments, an outer face advantage was observed across stimulus type. These results were further accommodated into Valentine's face-space model, opening up opportunities for further lines of enquiry.
12. What is special about faces? Examining face-categorisation with event-related potential measures. Dering, Benjamin Robert, January 2012.
Multiple independent lines of research have suggested that faces are a special class of stimulus. In the last 15 years, neuroimaging studies have shown greater activation to faces than to any other stimulus category in specific areas of cortex, leading to the idea that a portion of the fusiform gyrus, also known as the fusiform face area (FFA), is face-selective (Kanwisher, McDermott, & Chun, 1997). While findings from neuroimaging, behavioural, and lesion studies support the idea of a specialised visual system for faces, it is still debated whether face sensitivity arises from an inherent face modular network or from a general processing network manifesting perceptual expertise. A modular network is an abstract cognitive concept representing functions of the brain that require rapid, automatic cognitive processing. Modules are argued to be domain-specific and informationally encapsulated, such that they do not need to interact with other cognitive processes to function. In contrast to face modularity, the expertise account of face processing argues that faces recruit domain-general processing mechanisms, which are not unique to faces but are finely tuned by extensive perceptual experience. In other words, the expertise account considers faces as stimuli for which almost everyone is a skilled expert. Attempts to make progress in the debate opposing domain-specific (modular) vs. domain-general (expertise-based) processing have led to investigations into neurophysiological indices of face processing. Alongside the vast behavioural literature portraying the human face as a unique and special visual stimulus, electrophysiological studies have focused on a negative-polarity component from the N1 family peaking at around 170 ms, the N170. The N170 is maximal over parietal-occipital electrode sites and is widely acknowledged as largest in amplitude to faces (Bentin, Allison, Puce, Perez, & McCarthy, 1996). Since the seminal study by Bentin et al.
(1996), it has been claimed repeatedly that no visual stimulus other than faces produces negativities as pronounced in the N1 range (Itier & Taylor, 2004a). So robust is this finding that the N170 face effect has been replicated and championed to the point where it is no longer considered a hypothetical effect but rather an established fact (Eimer, 2011; Rossion & Jacques, 2008). Like fMRI, electrophysiology cannot directly elucidate the debate concerning modularity vs. expertise-based processing, since face-selectivity, observable in ERPs as an amplitude increase for faces in the N1 range, can be predicted by both theoretical standpoints. In contrast to the majority of the ERP literature, there are, however, instances where face-selectivity in the N170 range was not found, particularly in studies comparing full-front views of objects such as cars and butterflies (Rossion et al., 2000; Schweinberger, Huddy, & Burton, 2004; Thierry, Martin, Downing, & Pegna, 2007a). For instance, Thierry et al. (2007a) showed that inter-stimulus variability within an object class, a factor mixing physical and perceptual variance, modulates the amplitude of the N170 component. When comparing faces with other categories of object, previous studies have often used faces presented full front, in an upright orientation, whereas contrasting object classes have often been variable in size, background, orientation, viewpoint, etc. This may have led to imbalanced experimental comparisons artificially increasing the N170 elicited by faces, because of the low inter-stimulus variance usually inbuilt for faces in the design. Thierry et al. (2007a) compared full-front views of faces with full-front views of cars or butterflies and found no significant mean amplitude differences between conditions in the N170 range. Furthermore, they reported category-sensitivity unaffected by inter-stimulus variance 70 ms earlier, in the range of the P1 (~100 ms after stimulus onset).
It is noteworthy that P1 face-sensitivity has been largely overlooked in previous research, despite some reports which have highlighted such potential sensitivity (Herrmann, Ehlis, Ellgring, & Fallgatter, 2005; Herrmann, Ehlis, Muehlberger, & Fallgatter, 2005; Linkenkaer-Hansen et al., 1998). In sum, Thierry et al. (2007a) questioned the validity of object categorisation experiments which used stimuli varying in many more ways than object category, particularly in terms of low-level perceptual features. Thierry et al.'s (2007a) results have been staunchly refuted (see Bentin et al., 2007; but also Thierry, Martin, Downing, & Pegna, 2007b). In fact, Rossion and Jacques (2008) dedicated a review article to dismissing the arguments put forward by Thierry et al. (2007a). In this publication, they present new data, using a design very similar to that of Thierry et al. (2007a) but displaying face-selectivity in the N170 range. The conflicting findings of Rossion and Jacques (2008) and Thierry et al. (2007a), and the heavy criticism by Rossion and Jacques (2008) of Thierry et al.'s conclusions, have created some confusion within the field, questioning the established view that the N170 reflects visual object categorisation. This thesis is concerned with the further characterisation of stages in face processing as indexed by ERPs. Specifically, I question the point in time at which ERP waveforms can detect the first observable differences between faces and other objects, and whether these differences are indicative of a specialised process dedicated to faces. I present a series of ERP experiments explicitly testing the category sensitivity of early ERP components, namely the P1 and the N170, since their functional significance remains poorly understood.
A subtheme of the thesis is to determine whether differences in ERP component amplitude constitute a reliable measure of face (and, more generally, object category) sensitivity, and if so, whether these differences are attributable to early object categorisation or to higher-level processes such as individual object recognition/identification.¹ More specifically, in the present work, the aim is to address the following questions: (1) Can Thierry et al. (2007a) be replicated, and does the task involved interact with the commonly accepted N170 category-selectivity? (2) Does inter-stimulus perceptual variance affect/interact with the N170 face inversion effect? (3) Do any other perceptual parameters affect P1 and N170 amplitude? For instance, does cropping faces out of heads modulate the P1/N170? (4) If one creates face-car hybrids using morphing algorithms, do the P1 and N170 reflect the amount of face information present in the stimulus? (5) Can expertise with complex visual stimuli entirely account for the N170 inversion effect? At this point, it is important to make a distinction between selectivity or specificity on the one hand and sensitivity on the other. To make a genuine claim regarding category selectivity/specificity², one would have to test objects from every single existing conceptual category in comparison to faces (in the present case). However, an ERP component can be sensitive to a particular category of objects when its amplitude and/or latency is modulated by categorical changes, without a need for exhaustively testing all existing categories, which would hardly be humanly possible.
¹ For the purpose of clarity throughout this thesis, I will refer to object categorisation when discussing the distinction between different categories of objects, and to recognition or identification when discussing the extraction of higher-level properties such as ethnic origin, emotional expression, intention, age, gender, and even familiarity or identity.
² Throughout this thesis the terms selective/specific will be used when referring to previous research making claims in support of N170 face selectivity, whereas the term sensitivity will be used to refer to the present results and any conclusions drawn from them.
13. Facial signals of personality in humans and chimpanzees. Kramer, Robin S. S., January 2012.
Recent evidence has begun to demonstrate that information regarding socially relevant traits is available from the static, neutral human face. In the current thesis, we replicated and extended previous research, showing that signals of personality and health were received by unfamiliar others. Further, these signals remained when information was limited to internal facial features, providing initial evidence of the location of these signals and the differing contributions of external and internal facial characteristics. By investigating the signal content of hemifaces, split vertically down the midline, we found asymmetries in the information signalled by the two sides of the face. While previous research has highlighted the role of the left hemiface in transitory signals of expression, we found that the right hemiface signalled more information regarding temporally stable personality traits. Given the similarities between humans and chimpanzees in facial morphology, face processing, and personality structure, we hypothesised an evolved system for signalling personality information that both species share. We provided the first evidence that personality information, in particular relating to dominance and extraversion, was indeed present in the chimpanzee face and could be accurately perceived by human observers. Our results support the idea that humans and chimpanzees share a system for signalling socially-relevant information from the face that dates back to our last common ancestor around six million years ago.
14. A statistical approach to facial identification. Morecroft, L. C., January 2009.
This thesis describes the development of statistical methods for facial identification. The objective is to provide a technique which can provide answers, based on probabilities, to the question of whether two images of a face are from the same person or whether there could be two different people whose facial images match equally well. The aim would be to contribute to evidence that an image captured, for example, at a crime scene by CCTV, is that of a suspect in custody. The methods developed are based on the underlying mathematics of faces (specifically the shape of the configuration of identified landmarks). At present, expert witnesses carry out facial comparisons to assess how alike two faces are, and their declared expert opinions are inevitably subjective. To develop the method, a large population study was carried out to explore facial variation. Sets of measurements of landmarks were digitally taken from ≈3000 facial images, and Procrustes analyses were performed to extract the underlying face shapes, which were used to estimate the parameters in a statistical model for the population of face shapes. This allows pairs of faces to be compared in relation to population variability using a multivariate normal likelihood ratio (MVNLR) procedure. The MVNLR technique is a recognised means for evidence evaluation, and is widely used, for example, on trace evidence and DNA matching. However, many modifications and adaptations were required because of unique aspects of facial data, such as high dimensionality, differential reliabilities of landmark identification, and differential distinctiveness within the population of certain facial features. The thesis describes techniques for the selection of appropriate landmarks and novel dimensionality reduction methods to accommodate these aspects, involving non-sequential selection of principal components (to avoid ephemeral facial expressions) and balancing of measures of reliability against selectivity and specificity.
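The core pipeline this abstract describes, Procrustes alignment of landmark configurations followed by a likelihood-ratio comparison under a multivariate normal population model, can be sketched roughly as follows. This is a minimal illustration only, not the author's implementation: the function names, the same-source/different-source simplification on the landmark difference vector, and all covariance choices are assumptions made for demonstration.

```python
import numpy as np

def procrustes_align(shape, reference):
    """Align one landmark configuration (n_landmarks x 2 array) to another:
    remove translation and scale, then find the optimal rotation
    (orthogonal Procrustes via SVD)."""
    A = shape - shape.mean(axis=0)          # remove translation
    B = reference - reference.mean(axis=0)
    A = A / np.linalg.norm(A)               # remove scale (unit centroid size)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)       # optimal rotation from SVD
    R = U @ Vt
    return (R @ A.T).T                      # rotated copy of A, aligned to B

def log_likelihood_ratio(x, y, pop_cov, meas_cov):
    """Simplified log-LR on the difference of two aligned shape vectors.
    Same-source hypothesis: the difference reflects measurement error only.
    Different-source hypothesis: it also reflects population variation.
    Positive values favour 'same face'. (Illustrative simplification, not
    the thesis's full MVNLR procedure.)"""
    def logpdf(d, S):  # log density of N(0, S) evaluated at d
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (len(d) * np.log(2 * np.pi) + logdet
                       + d @ np.linalg.solve(S, d))
    d = x - y
    log_num = logpdf(d, 2 * meas_cov)               # same source
    log_den = logpdf(d, 2 * (pop_cov + meas_cov))   # different sources
    return log_num - log_den
```

In practice the thesis works with dimension-reduced shape coordinates and separately estimated within- and between-face variability; the sketch above is only meant to convey the align-then-compare structure the abstract outlines.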
15. Emotive aspects of face perception and the human brain. Winston, Joel Solomon, January 2005.
The neural mechanisms by which faces are processed are the subject of great interest. A key characteristic of human faces is the ability to induce emotion in the viewer, through expressed emotion or other more abstract constructs such as trustworthiness or attractiveness. In this thesis, five functional magnetic resonance imaging experiments that probe the neural systems underpinning the perception of such emotive characteristics are described. I show that perception of emotional expression and identity are doubly dissociable with fusiform cortex encoding identity and superior temporal sulcus (STS) encoding expression. In subsequent experiments I explore the parameters under which distinct brain regions involved in emotional face perception engage, in particular addressing whether responses are automatic or dependent upon a particular task. The issue of whether distinct emotions are processed by different brain regions is considered and the basic stimulus property of spatial frequency is manipulated to address the idea of a subcortical visual pathway carrying emotional information. I describe two further experiments that address the more complex social constructs of attractiveness and trustworthiness, and demonstrate that broadly similar cortical circuitry is invoked when processing these attributes compared to basic facial emotions. Ultimately, a network of brain regions including amygdala, fusiform cortex, STS, and orbital and medial prefrontal cortex (OMPFC) is characterised as the substrate for emotional face perception. In general, I found that amygdala and fusiform responses to emotive faces are automatic, whereas STS and OMPFC responses show a greater degree of task-dependence. I interpret the amygdala response as an emotional labelling process, whereas fusiform enhancements to emotive faces probably reflect feedback from amygdala to modulate early face processing. 
STS responses indicate the encoding of specific facial expressions in this region and a wider role in intention detection. The response profile of the OMPFC is complex and I suggest multiple roles for this region in mediating an interaction between cognition and emotion.
16. Processing unfamiliar faces. Megreya, Ahmed M., January 2005.
It is well established that matching unfamiliar faces is highly error prone, even under seemingly optimal conditions. This thesis shows large individual differences in unfamiliar face matching. Across several visual cognition tasks, the best predictor for this variability was recognition of inverted faces, regardless of whether they were familiar or unfamiliar. In stark contrast, there was no relationship between upright familiar and unfamiliar face processing. Moreover, the ability to match faces was unrelated to the ability to reject these faces, unless they were upright familiars. Therefore, the processes involved in upright unfamiliar face processing appeared to be qualitatively similar to those underlying the recognition of inverted familiar and unfamiliar faces, but very different to those responsible for upright familiar face processing. Finally, the presence of a second face severely impaired matching a target person, particularly when they were presented close together. The implications of these findings for the forensic field are discussed.
17. Cognitive representation of facial asymmetry. White, David, January 2008.
The human face displays mild asymmetry, with measurements of facial structure differing from left to right of the meridian by an average of three percent. Presently this source of variation is of theoretical interest primarily to researchers studying the perception of beauty, but a very limited amount of research has addressed the question of how this variation contributes to the cognitive processes underlying face recognition. This is surprising given that measurement of facial asymmetry can reliably distinguish between even the most similar of faces. Furthermore, brain regions responsible for symmetry detection support face-processing regions, and detection of symmetry is superior in upright faces relative to inverted and contrast-reversed face stimuli. In addition, facial asymmetry provides a useful biometric for automatic face recognition systems, and understanding the contribution of facial asymmetry in human face recognition may therefore inform the development of these systems. In this thesis the extent to which facial asymmetry is implicated in the process of recognition in human participants is quantified. By measuring the effect of left-right reversal on various tasks of face processing, the degree to which facial asymmetry is represented by memory is investigated. Marginal sensitivity to mirror reversal is demonstrated in a number of instances, and it is therefore concluded that cognitive representations of faces specify structural asymmetry. Reversal effects are typically slight however and on a number of occasions no reliable effect of this stimulus manipulation is detected. It is likely that a general tendency to treat mirror reversals as equivalent stimuli, in addition to an inability to recall lateral orientation of objects from memory, somewhat obscure the effect of reversal. The findings are discussed in the context of existing literature examining the way in which faces are cognitively represented.
18. Scanning, biases, and inhibition to visual stimuli in healthy and right hemisphere lesioned adults. Butler, Stephen Hugh, January 2007.
This thesis explores right hemisphere involvement in perceptual biases to chimeric faces, and posterior right hemisphere involvement in response inhibition, through an examination of the role of eye movements. Studies of patients with focal brain lesions and neuroimaging research indicate that face processing is predominantly based on right hemisphere function. Additionally, experiments using chimeric faces, where the left and the right hand side of the face are different, have shown that observers tend to bias their responses toward the information on the left. A series of experiments were conducted using lifelike gender-based chimeric faces (Burt and Perrett, 1997) to explore the relationship between eye movements and perceptual biases. A left perceptual bias was observed in experiment 1, in that subjects based their gender decision significantly more frequently on the left side of the chimeric faces. Additionally, analysis of the eye movement patterns indicated a strong tendency to first fixate on the left side of the image, and subsequently a relationship between perceptual biases and eye movements. Experiment 2 examined the issue of inversion of such facial stimuli and provided evidence that the right hemisphere may still be more influential in determining gender from inverted chimeric stimuli, as a significant left perceptual bias was demonstrated with these types of stimuli. It is proposed that the chimeric bias effects found in this experiment argue against the idea that inversion destroys the right hemisphere superiority for faces. Whilst experiments 1 and 2 provided evidence for right hemisphere dominance in the processing of chimeric faces, experiments 3 and 4 investigated the influence of eye movements and exposure duration in modulating the bias. Experiments 3 and 4 demonstrated that in younger adults, but not older adults, a reliable leftward bias can be obtained when stimuli are exposed for brief durations only.
However, evidence is provided indicating that the perceptual bias is enhanced in the presence of eye movements. Additionally, experiment 4 shows that the perceptual bias is demonstrably diminished in older adults; possible mechanisms for this finding are discussed. Experiment 5 reviews evidence related to dysfunction in visual search in patients with right hemisphere lesions; however, what is less well understood is how well such patients are able to inhibit a response in an otherwise simple search task. Experiments 5 and 6 explore oculomotor capture in such patients. Patients were asked to search for a colour target amongst distracters and to signal target location with a saccade. On each trial an additional distracter was presented, which could be either similar or dissimilar to the target and appear either with or without a sudden onset. Patients were demonstrated to have higher oculomotor capture rates by the additional distracter, and to be more susceptible to the distracting influence of sudden onsets. Experiment 7 employed an antisaccade task and a fixation task and demonstrated further impairments in response inhibition in the same group of patients. In both tasks, patients were demonstrated to have significant difficulty in inhibiting an eye movement to a peripheral distracter (relative to age-matched controls). Results of experiments 5-7 indicate that patients with right hemisphere lesions that spare the frontal lobe have demonstrable impairments in inhibiting responses to suddenly appearing peripheral stimuli, implicating a role for posterior brain structures in this type of inhibition.
19. Sex differences and the role of sex hormones in face development and face processing. Mareckova, Klara, January 2013.
Sex differences have been identified in both the external appearance of faces (e.g. Bulygina et al., 2006; Weston et al., 2007) and the way information about faces is extracted by our brains, that is, in face processing (e.g. Tahmasebi et al., 2012; Hampson et al., 2006). The mechanisms leading to the development of such sex differences are not well understood. This thesis explores the role of sex hormones in face development and face processing. Data from two large-scale studies (Saguenay Youth Study and Imagen, with n=1,000 and 2,000, respectively) and four smaller datasets (Cycle-Pill Study, n=20; Pill Study, n=20; First Impression Study, n=120; and Twin Study, n=119) were used to explore the effects of sex and sex hormones on face development (head MR images, MRI-face reconstruction) and face processing (functional MRI data, eye-tracking data). The shape of male and female faces was influenced by both prenatal and pubertal androgens. A facial signature of prenatal androgens, identified by the sex-discordant twin design, was also found in an independent dataset of female adolescents (singletons), and we showed that prenatal androgens, indexed indirectly by the facial signature, were associated with larger brain size. We propose that this facial signature might be used, similarly to digit ratio, as an indirect index of prenatal androgens. Variability in postnatal sex hormones, due to the use of oral contraception and the phase of the menstrual cycle, influenced brain response to faces. Using the same dynamic face stimuli as in the functional magnetic resonance imaging (fMRI) study, we showed that eye movements scanning the face did not differ between users and non-users of oral contraception. We conclude that effects of sex hormones can be observed in both the face and the brain, and that these effects help us understand sex differences in face shape and face processing.
**This version does not contain the previously published journal articles reproduced in the printed thesis (appendices 1-3). For details see p. 188.**
20. Improving composite images of faces produced by eyewitnesses. Ness, Hayley, January 2003.
When a witness views a crime, they are often asked to construct a facial likeness, or composite, of the suspect. These composites are then used to stimulate recognition from someone who is familiar with the suspect. Facial composites are commonly used in large-scale cases (e.g. Jill Dando, the Yorkshire Ripper); however, a great deal of research has indicated that facial composites perform poorly and often do not portray an accurate likeness of the suspect. This thesis therefore examined methods of improving facial composites. In particular, it examined methods of increasing the likeness portrayed in composites, both during construction and at test. Experiments 1 to 3 examined the effectiveness of a new three-quarter-view database in PROfit. Experiment 1 examined whether the construction of composites in a three-quarter view would aid performance. Participant-witnesses were exposed to all views of a target, and the results indicated that three-quarter-view composites performed as well as full-face composites, but not better. Experiments 2 and 3 then examined whether the presentation of two composites (one in a full-face view and the other in a three-quarter view) from the same participant-witness would increase performance above the level observed for a single composite. The results revealed that two views were better than one. In addition, experiment 3 examined the issue of encoding specificity and viewpoint dependency in composite construction. All participant-witnesses were exposed to either one view of a target (full-face or three-quarter) or all views, and they were asked to construct both a full-face and a three-quarter-view composite. The results indicated that performance was better when all views of a face had been presented. When a target had been seen in a three-quarter view, it was better to construct a three-quarter-view composite.
However, when a target had been seen in a full-face view, performance for both full-face and three-quarter composites was poor. Experiments 4 to 8 examined whether the presentation of composites from multiple witnesses would increase performance. The results revealed that morphing composites from four different witnesses (4-Morphs) resulted in an image that performed as well as or better than the best single image. Further experimentation attempted to examine why multiple composites performed well. In particular, it was asked whether multiple composites performed well because they contained varied information or whether they performed well because they just contained more information. Multiple composites from both single and multiple witnesses using the same (PROfit) and different (PROfit, E-FIT, Sketch, EvoFIT) composite techniques were compared and the results revealed that multiple composites performed well because they contained different memorial representations. This combination of different memorial representations appeared to result in an image that was closer to the ideal, or prototypical image. Experiments 9 to 12 examined the relationship between verbal descriptions and composite quality. The results revealed that there was no clear relationship between the amount of description provided, the accuracy of the description and performance of the resulting composite. Further experimentation examined whether the presentation of a composite and a description would increase performance above the level observed for a single composite. The results revealed that the combination of a description and a composite from the same participant-witness did increase performance. This indicated that descriptions and composites might contain differing amounts and types of featural and configurational information. Both the theoretical and practical implications of these results are discussed. Experiments 1, 2 and 3 of this thesis have been submitted for publication. 
Ness, H., Hancock, P. J. B., Bowie, L., and Bruce, V. Are two views better than one? A study investigating recognition of full-face and three-quarter-view composites. Submitted to Applied Cognitive Psychology. Experiment 4 of this thesis appears in Bruce, V., Ness, H., Hancock, P. J. B., Newman, C., and Rarity, J. (2002). Four heads are better than one: Combining face composites yields improvements in face likeness. Journal of Applied Psychology, 87(5), 894-902. Other publications: Frowd, C. D., Carson, D., Ness, H., Richardson, J., Morrison, L., McLanaghan, S., and Hancock, P. J. B. Evaluating facial composite systems. Accepted for publication in Psychology, Crime and Law. Frowd, C. D., Carson, D., Ness, H., McQuiston, D., Richardson, J., Baldwin, H., and Hancock, P. J. B. Contemporary composite techniques: The impact of a forensically relevant target delay. Accepted for publication in Legal and Criminological Psychology.