1301 |
Effect of the Muslim headscarf on face perception : a series of psychological experiments looking at how the Muslim headscarf influences the perception of (South Asian) faces. Toseeb, Mohammed Umar. January 2012 (has links)
The Muslim headscarf conceals the hair and other external features of a face, which may have implications for the recognition of such faces. The experiments reported in this thesis aimed to investigate anecdotal reports suggesting that headscarf-wearing females are more difficult to recognise. This was done with a series of experiments involving a yes/no recognition task. The stimuli were images of South Asian females photographed wearing a Muslim headscarf (HS), with their own hair visible (H), and, in a third set, with their external features cropped (CR). Most importantly, participants took part either in the condition in which the state of the external features remained the same between the learning and test stages (Same) or in the condition in which it was switched between the two stages (Switch). In one experiment participants also completed a Social Contact Questionnaire. Surprisingly, in the Same condition there was no difference in the recognition rates of faces presented with hair, with headscarf, or cropped. However, participants in the Switch condition performed significantly worse than those in the Same condition. There was also no difference in the percentage of fixations to the external features between the Same and Switch conditions, which implied that the drop in performance between the two conditions was not mediated by eye movements. These results suggest that the internal and external features of a face are processed interactively: although the external features were not fixated, manipulating them caused a drop in performance. This was confirmed in a separate experiment in which participants were unable to ignore the external features when asked to judge the similarity of the internal features of pairs of faces; pairs of headscarf faces were rated as more similar than pairs of faces with hair. Finally, for one group of participants, contact with headscarf-wearing females was positively correlated with the recognition of headscarf-wearing faces. It was concluded that the headscarf per se did not impair face recognition and that there is enough information in the internal features of a face for optimal recognition; however, performance was disrupted when the presence or absence of the headscarf was manipulated between learning and test.
|
1302 |
Detection of facial expressions based on time-dependent morphological features. Bozed, Kenz Amhmed. January 2011 (has links)
Facial expression detection by a machine is a valuable topic for Human Computer Interaction and has been a research issue in the behavioural sciences for some time. Recently, significant progress has been achieved in machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions and leads to the proposal of new algorithms for face detection and facial feature localisation, and the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction, and feature classification. To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The means and standard deviations of the segmented facial parts are then computed and used as features for the face. For images belonging to the same expression class, these features are passed to the K-means algorithm to compute the centroid point of each class. The Euclidean distance is then computed between each feature point and its cluster centre in the same expression class. This determines how close a facial expression is to a particular class, and the distances can be used as observation vectors for a Hidden Markov Model (HMM) classifier. Thus, an HMM is built to evaluate an expression of a subject as belonging to one of six expression classes (Joy, Anger, Surprise, Sadness, Fear and Disgust) using the distance features. To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contain a mixture of expressions. An average successful detection rate of 95.6% is measured over the 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and presents improved facial expression detection results compared with using whole facial features as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial feature extraction process, and facial feature classification. These were tested and verified using the prototype system.
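A minimal sketch (not taken from the thesis) of the distance-feature step described above: per-frame features, assumed here to be the means and standard deviations of segmented facial parts, are compared against per-class centroids, and the resulting Euclidean distances form the observation vectors that an HMM classifier could consume. The array shapes, function names, and toy data are illustrative assumptions.

# Sketch of distance-based observation vectors for expression classification.
import numpy as np

EXPRESSIONS = ["Joy", "Anger", "Surprise", "Sadness", "Fear", "Disgust"]

def class_centroids(features_by_class):
    """One centroid per expression class (with k=1, K-means reduces to the class mean).

    features_by_class: dict mapping class name -> (n_images, n_features) array.
    """
    return {c: X.mean(axis=0) for c, X in features_by_class.items()}

def distance_observations(frame_features, centroids):
    """Euclidean distance from each frame's feature vector to every class centroid.

    Returns an (n_frames, n_classes) array that could serve as an HMM observation sequence.
    """
    return np.stack([np.linalg.norm(frame_features - centroids[c], axis=1)
                     for c in EXPRESSIONS], axis=1)

# Toy usage with random stand-in features (8 features per frame: mean/std of 4 facial parts).
rng = np.random.default_rng(0)
train = {c: rng.normal(size=(20, 8)) for c in EXPRESSIONS}
centroids = class_centroids(train)
clip = rng.normal(size=(30, 8))            # 30 frames of one video clip
obs = distance_observations(clip, centroids)
print(obs.shape)                           # (30, 6)

In a full pipeline, one HMM per expression class would be trained on such distance sequences and the clip assigned to the class whose model gives the highest likelihood.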
|
1303 |
A Cognitive Neuroscience of Social Groups. Contreras, Juan Manuel. 30 September 2013 (has links)
We used functional magnetic resonance imaging to investigate how the human brain processes information about social groups in three domains. Study 1: Semantic knowledge. Participants were scanned while they answered questions about their knowledge of both social categories and non-social categories like object groups and species of nonhuman animals. Brain regions previously identified in processing semantic information are more robustly engaged by nonsocial semantics than stereotypes. In contrast, stereotypes elicit greater activity in brain regions implicated in social cognition. These results suggest that stereotypes should be considered distinct from other forms of semantic knowledge. Study 2: Theory of mind. Participants were scanned while they answered questions about the mental states and physical attributes of individual people and groups. Regions previously associated with mentalizing about individuals were also robustly responsive to judgments of groups. However, multivariate searchlight analysis revealed that several of these regions showed distinct multivoxel patterns of response to groups and individual people. These findings suggest that perceivers mentalize about groups in a manner qualitatively similar to mentalizing about individual people, but that the brain nevertheless maintains important distinctions between the representations of such entities. Study 3: Social categorization. Participants were scanned while they categorized the sex and race of unfamiliar Black men, Black women, White men, and White women. Multivariate pattern analysis revealed that multivoxel patterns in FFA--but not other face-selective brain regions, other category-selective brain regions, or early visual cortex--differentiated faces by sex and race. Specifically, patterns of voxel-based responses were more similar between individuals of the same sex than between men and women, and between individuals of the same race than between Black and White individuals. These results suggest that FFA represents the sex and race of faces. Together, these three studies contribute to a growing cognitive neuroscience of social groups. / Psychology
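As a rough, hypothetical illustration of the pattern-similarity logic in Study 3 (not the study's actual analysis pipeline), the sketch below compares mean pairwise correlations of simulated multivoxel patterns within and between categories; all variable names, sizes, and data are stand-ins.

# Sketch: within- vs between-category similarity of multivoxel response patterns.
import numpy as np
from itertools import combinations

def pattern_similarity(patterns, labels):
    """Mean pairwise Pearson correlation within and between label groups.

    patterns: (n_stimuli, n_voxels) array of responses from one ROI (e.g., FFA).
    labels:   length-n_stimuli sequence of category labels (e.g., 'M'/'F').
    """
    within, between = [], []
    for i, j in combinations(range(len(labels)), 2):
        r = np.corrcoef(patterns[i], patterns[j])[0, 1]
        (within if labels[i] == labels[j] else between).append(r)
    return np.mean(within), np.mean(between)

# Simulated data: a weak category signal buried in noise.
rng = np.random.default_rng(1)
voxels = 200
base_m, base_f = rng.normal(size=voxels), rng.normal(size=voxels)
patterns = np.vstack([base_m + rng.normal(scale=2.0, size=voxels) for _ in range(10)] +
                     [base_f + rng.normal(scale=2.0, size=voxels) for _ in range(10)])
labels = ['M'] * 10 + ['F'] * 10
w, b = pattern_similarity(patterns, labels)
print(f"within-category r = {w:.3f}, between-category r = {b:.3f}")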
|
1304 |
Guidelines to Overcome Cultural Barriers by Coping with the Power Distance for the Successful Project Implementation in China. Gomootsukhavadee, Methavee; Tavera Cruz, José de Jesús. January 2010 (has links)
Some authors have identified different cultural dimensions that can be used to classify people from different cultures in terms of values, beliefs, and behaviors, which are shaped by the environment in which they are brought up. Among these dimensions, power distance is highlighted in this research in order to identify how relationships of power among individuals can become a source of cultural barriers that ultimately prevent managers from successfully accomplishing their objectives. In the field of project management, it is during project implementation that the interaction among stakeholders reaches its highest point of importance, because all the plans formulated in previous phases are transformed into real actions. International project management requires special skills and knowledge in order to implement the strategy correctly and bring the project to proper completion. Therefore, this research responds to the need within the project management field for a practical tool that can be applied to overcome cultural barriers. By gathering the points of view of managers of different backgrounds and ages, Chinese and non-Chinese, a list of practical guidelines is provided for foreigners to follow in order to avoid conflicts that could arise from cultural issues related to the way relationships of power are established in China. To develop these guidelines, a qualitative research method was applied, carrying out semi-structured interviews with a diversified group of people, Chinese and non-Chinese, in order to draw on their experience and achieve the final objective of this research. After collecting the necessary information conveyed by the interviewees, the suggested guidelines were developed based on the findings, which describe the factors of power and the mechanisms by which Face and Guanxi define the relationships of power among Chinese individuals.
|
1305 |
Predictive eyes precede retrieval : visual recognition as hypothesis testing. Holm, Linus. January 2007 (has links)
Does visual recognition entail verifying an idea about what is perceived? This question was addressed in the three studies of this thesis. The main hypothesis underlying the investigation was that visual recognition is an active process involving hypothesis testing. Recognition of faces (Study 1), scenes (Study 2) and objects (Study 3) was investigated using eye movement registration as a window on the recognition process. In Study 1, a functional relationship between eye movements and face recognition was established: restricting the eye movements reduced recognition performance. In addition, perceptual reinstatement, as indicated by eye movement consistency across study and test, was related to recollective experience at test. Specifically, explicit recollection was related to higher eye movement consistency than familiarity-based recognition and false rejections (Studies 1-2). Furthermore, valid expectations about a forthcoming stimulus scene produced eye movements that were more similar to those of an earlier study episode than invalid expectations did (Study 2). In Study 3, participants recognized fragmented objects embedded in nonsense fragments. Around 8 seconds prior to explicit recognition, participants began to fixate the object region rather than a similar control region in the stimulus pictures. Before participants indicated awareness of the object, they fixated it with an average of 9 consecutive fixations. Hence, participants were looking at the object as if they had recognized it before they became aware of its identity. Furthermore, prior object information affected eye movement sampling of the stimulus, suggesting that semantic memory was involved in guiding the eyes during object recognition even before participants were aware of the object's presence. Collectively, the studies support the view that gaze control is instrumental to visual recognition performance and that visual recognition is an interactive process between memory representation and information sampling.
|
1306 |
Production and properties of epitaxial graphene on the carbon terminated face of hexagonal silicon carbide. Hu, Yike. 13 January 2014 (has links)
Graphene is widely considered to be a promising candidate for a new generation of electronics, but there are many outstanding fundamental issues that need to be addressed before this promise can be realized. This thesis focuses on the production and properties of graphene grown epitaxially on the carbon terminated face (C-face) of hexagonal silicon carbide, leading to the construction of a novel graphene transistor structure. C-face epitaxial graphene multilayers are unique due to their rotational stacking, which causes the individual layers to be electronically decoupled from each other. Well-formed C-face epitaxial graphene single layers have exceptionally high mobilities (exceeding 10,000 cm^2/Vs), significantly greater than those of Si-face graphene monolayers. This thesis investigates the growth and properties of C-face single layer graphene. A field effect transistor based on single layer graphene was fabricated and characterized for the first time, with aluminum oxide or boron nitride used as the gate dielectric. Additionally, an all-graphene/SiC Schottky barrier transistor on the C-face of SiC, composed of a 2DEG at the SiC/Si2O3 interface and multilayer graphene contacts, was demonstrated. A multiple-growth scheme was adopted to achieve this unique structure.
|
1307 |
An investigation of young infants’ ability to match phonetic and gender information in dynamic faces and voice. Patterson, Michelle Louise. 11 1900 (has links)
This dissertation explores the nature and ontogeny of infants' ability to match phonetic information in comparison to non-speech information in the face and voice. Previous research shows that infants' ability to match phonetic information in face and voice is robust at 4.5 months of age (e.g., Kuhl & Meltzoff, 1982; 1984; 1988; Patterson & Werker, 1999). These findings support claims that young infants can perceive structural correspondences between audio and visual aspects of phonetic input and that speech is represented amodally. It remains unclear, however, specifically what factors allow speech to be perceived amodally and whether the intermodal perception of other aspects of face and voice is like that of speech. Gender is another biologically significant cue that is available in both the face and voice. In this dissertation, nine experiments examine infants' ability to match phonetic and gender information with dynamic faces and voices.

Infants were seated in front of two side-by-side video monitors which displayed filmed images of a female or male face, each articulating a vowel sound (/a/ or /i/) in synchrony. The sound was played through a central speaker and corresponded with one of the displays but was synchronous with both. In Experiment 1, 4.5-month-old infants did not look preferentially at the face that matched the gender of the heard voice when presented with the same stimuli that produced a robust phonetic matching effect. In Experiments 2 through 4, vowel and gender information were placed in conflict to determine the relative contribution of each in infants' ability to match bimodal information in the face and voice. The age at which infants do match gender information with my stimuli was determined in Experiments 5 and 6. In order to explore whether matching phonetic information in face and voice is based on featural or configural information, two experiments examined infants' ability to match phonetic information using inverted faces (Experiment 7) and upright faces with inverted mouths (Experiment 8). Finally, Experiment 9 extended the phonetic matching effect to 2-month-old infants. The experiments in this dissertation provide evidence that, at 4.5 months of age, infants are more likely to attend to phonetic information in the face and voice than to gender information. Phonetic information may have a special salience and/or unity that is not apparent in similar but non-phonetic events. The findings are discussed in relation to key theories of perceptual development.
|
1308 |
Perception des visages auprès des adolescents et des adultes autistes [Face perception in autistic adolescents and adults]. Morin, Karine. 06 1900 (has links)
Face perception is the most commonly used visual metric of social abilities in autism. When found to be atypical, the nature of its origin is often contentious. One hypothesis proposes that the locally-oriented visual analysis that characterizes persons with autism influences performance on most face tasks for which configural analysis is optimal. Objective. We evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to the local face attributes used for identity discrimination. Methods. Fifty-eight participants, with and without autism, matched for global intellectual quotient, age, and gender, were asked to perform a face identity discrimination task similar to that of Habak, Wilkinson, and Wilson (2008). Stimuli were frontal and side viewpoints of simplified and ecologically validated synthetic faces. Face identity discrimination thresholds, defined as the minimum percentage of change in face geometry yielding 75% correct performance, were obtained using a two-alternative, temporal forced-choice match-to-sample paradigm. Results. Analyses revealed a significant interaction between group and condition, with significant group differences found only for the viewpoint-change condition, where performance of the autism group was significantly decreased compared to that of neurotypical participants. Discussion. The selective decrease in autism performance for the viewpoint-change condition suggests that face identity discrimination in autism is more difficult when (i) access to local cues is minimized, and (ii) an increased dependence on integrative analysis is introduced to the face task used.
/ Co-authors of the article: Karine Morin, Jacalyn Guy, Claudine Habak, Hugh R. Wilson, Linda S. Pagani, Laurent Mottron, Armando Bertone
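The 75%-correct threshold described in the abstract above could, for example, be estimated by fitting a psychometric function to two-alternative forced-choice accuracy data. The sketch below is an assumed illustration, not the study's analysis code: it fits a Weibull function with a 50% guess rate to invented data points and reads off the 75% point.

# Sketch: estimating a 75%-correct identity-discrimination threshold from 2AFC data.
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    """Proportion correct in a 2AFC task as a function of % change in face geometry."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

# Stand-in data: % geometry change tested and proportion correct at each level.
levels = np.array([1, 2, 4, 6, 8, 12], dtype=float)
p_correct = np.array([0.52, 0.60, 0.71, 0.83, 0.90, 0.97])

(alpha, beta), _ = curve_fit(weibull_2afc, levels, p_correct, p0=[4.0, 2.0])

# Invert the fitted function at 75% correct to obtain the threshold.
threshold = alpha * (-np.log(1.0 - (0.75 - 0.5) / 0.5)) ** (1.0 / beta)
print(f"75%-correct threshold: {threshold:.2f}% geometry change")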
|
1309 |
Weakly Trained Parallel Classifier and CoLBP Features for Frontal Face Detection in Surveillance Applications. Louis, Wael. 10 January 2011 (has links)
Face detection in video sequences is becoming popular in surveillance applications. The trade-off between obtaining discriminative features for accurate detection and the computational overhead of extracting those features, which affects classification speed, is a persistent problem. Two ideas are introduced to increase the features' discriminative power. These ideas are used to implement two frontal face detectors, examined on a 2D low-resolution surveillance sequence.
The first contribution is the parallel classifier. Highly discriminative features are obtained by fusing the decisions of two classifiers trained on different feature types, where each feature type targets a different image structure. The result is a classifier that is accurate and fast to train.
Second, Co-occurrence of Local Binary Patterns (CoLBP) features are proposed; these target the pixels of the image directly. CoLBP features capture the joint probability of multiple LBP features. Their extraction is computationally efficient and they are highly discriminative; hence, accurate detection is achieved.
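To make the CoLBP idea concrete, the sketch below computes basic 8-neighbour LBP codes and then a joint histogram of code pairs at a fixed spatial offset, approximating the "joint probability of multiple LBP features". The offset, bin count, and helper names are assumptions for illustration, not the thesis implementation.

# Sketch: basic LBP codes and a co-occurrence (joint) histogram of code pairs.
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes

def colbp_histogram(codes, offset=(0, 4), bins=256):
    """Joint histogram of LBP code pairs separated by a fixed (dy, dx) offset."""
    dy, dx = offset
    a = codes[:codes.shape[0] - dy, :codes.shape[1] - dx]
    b = codes[dy:, dx:]
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, range=[[0, 256], [0, 256]])
    return hist / hist.sum()        # normalised joint probability of code pairs

face_patch = np.random.default_rng(2).integers(0, 256, size=(24, 24)).astype(np.uint8)
codes = lbp_codes(face_patch)
H = colbp_histogram(codes)
print(codes.shape, H.shape)         # (22, 22) (256, 256)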
|
1310 |
Multilinear Subspace Learning for Face and Gait Recognition. Lu, Haiping. 19 January 2009 (links)
Face and gait recognition problems are challenging due to largely varying appearances, highly complex pattern distributions, and insufficient training samples. This dissertation focuses on multilinear subspace learning for face and gait recognition, where low-dimensional representations are learned directly from tensorial face or gait objects.
This research introduces a unifying multilinear subspace learning framework for systematic treatment of the multilinear subspace learning problem. Three multilinear projections are categorized according to the input-output space mapping as: vector-to-vector projection, tensor-to-tensor projection, and tensor-to-vector projection. Techniques for subspace learning from tensorial data are then proposed and analyzed. Multilinear principal component analysis (MPCA) seeks a tensor-to-tensor projection that maximizes the variation captured in the projected space, and it is further combined with linear discriminant analysis and boosting for better recognition performance. Uncorrelated MPCA (UMPCA) solves for a tensor-to-vector projection that maximizes the captured variation in the projected space while enforcing the zero-correlation constraint. Uncorrelated multilinear discriminant analysis (UMLDA) aims to produce uncorrelated features through a tensor-to-vector projection that maximizes a ratio of the between-class scatter over the within-class scatter defined in the projected space. Regularization and aggregation are incorporated in the UMLDA solution for enhanced performance.
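As an illustration of the tensor-to-tensor projection that MPCA builds on, the sketch below applies one projection matrix per mode of a sample tensor via mode-n products; the dimensions, function names, and random matrices are assumed for the example and are not taken from the dissertation.

# Sketch: multilinear (tensor-to-tensor) projection via mode-n products.
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode` (matrix shape: new_dim x old_dim)."""
    moved = np.moveaxis(tensor, mode, 0)                # bring the mode to the front
    shape = moved.shape
    flat = moved.reshape(shape[0], -1)                  # unfold along that mode
    out = matrix @ flat                                 # project
    return np.moveaxis(out.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

def tensor_to_tensor_projection(sample, projections):
    """Apply one projection matrix per mode (the multilinear projection used in MPCA)."""
    out = sample
    for mode, U in enumerate(projections):
        out = mode_n_product(out, U, mode)
    return out

# Toy example: a 32x22x10 tensor object (e.g., a gait silhouette sequence) reduced to 8x6x4.
rng = np.random.default_rng(3)
sample = rng.normal(size=(32, 22, 10))
projections = [rng.normal(size=(8, 32)), rng.normal(size=(6, 22)), rng.normal(size=(4, 10))]
low_dim = tensor_to_tensor_projection(sample, projections)
print(low_dim.shape)    # (8, 6, 4)

In MPCA the per-mode matrices would be learned to maximize the variation captured in the projected space rather than drawn at random as in this toy example.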
Experimental studies and comparative evaluations are presented and analyzed on the PIE and FERET face databases, and the USF gait database. The results indicate that the MPCA-based solution has achieved the best overall performance in various learning scenarios, the UMLDA-based solution has produced the most stable and competitive results with the same parameter setting, and the UMPCA algorithm is effective in unsupervised learning in low-dimensional subspace. Besides advancing the state-of-the-art of multilinear subspace learning for face and gait recognition, this dissertation also has potential impact in both the development of new multilinear subspace learning algorithms and other applications involving tensor objects.
|