831 |
Image size and resolution in face recognition / Bilson, Amy Jo. January 1987 (has links)
Thesis (Ph. D.)--University of Washington, 1987. / Vita. Bibliography: leaves [115]-121.
|
832 |
An analysis of the metrical and morphological features of South African black males for the purpose of facial identification / Roelofse, Michelle Marizan January 2006 (has links)
Thesis (M.Sc. (Anatomy))--Faculty of Health Sciences, University of Pretoria, 2006. / Includes bibliographical references.
|
833 |
Racial categorization of ethnically ambiguous faces and the cross-race effect / Baldwin, Shaun. January 2007 (has links)
Thesis (M.S.)--Villanova University, 2007. / Psychology Dept. Includes bibliographical references.
|
834 |
A new biologically motivated framework for robust object recognition / Serre, Thomas; Wolf, Lior; Poggio, Tomaso. 14 November 2004 (has links)
In this paper, we introduce a novel set of features for robust object recognition, which exhibits outstanding performance on a variety of object categories while being capable of learning from only a few training examples. Each element of this set is a complex feature obtained by combining position- and scale-tolerant edge detectors over neighboring positions and multiple orientations. Our system, motivated by a quantitative model of visual cortex, outperforms state-of-the-art systems on a variety of object image datasets from different groups. We also show that our system is able to learn from very few examples with no prior category knowledge. The success of the approach is also a suggestive plausibility proof for a class of feed-forward models of object recognition in cortex. Finally, we conjecture the existence of a universal overcomplete dictionary of features that could handle the recognition of all object categories.
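As a rough illustration of the kind of feature the abstract describes (oriented edge responses made position-tolerant by local max pooling), here is a minimal NumPy sketch; the four finite-difference "orientations", the pooling window size, and the function names are simplified assumptions for illustration, not the authors' actual filter bank:

```python
import numpy as np

def oriented_edges(img):
    """Responses of four crude orientation-selective edge detectors
    (finite differences at 0, 90, and the two diagonal directions)."""
    h = np.abs(img[:, 1:] - img[:, :-1])[:-1, :]   # horizontal differences
    v = np.abs(img[1:, :] - img[:-1, :])[:, :-1]   # vertical differences
    d1 = np.abs(img[1:, 1:] - img[:-1, :-1])       # diagonal differences
    d2 = np.abs(img[1:, :-1] - img[:-1, 1:])       # anti-diagonal differences
    return np.stack([h, v, d1, d2])                # (4, H-1, W-1)

def pool_max(resp, size):
    """Position tolerance: max over non-overlapping size x size windows."""
    o, h, w = resp.shape
    h, w = h - h % size, w - w % size
    r = resp[:, :h, :w].reshape(o, h // size, size, w // size, size)
    return r.max(axis=(2, 4))

def c_features(img, pool=4):
    """Complex-cell-like feature vector: edges pooled over position."""
    return pool_max(oriented_edges(img), pool).ravel()
```

Because each feature is a maximum over a spatial window, small shifts of an edge within a window leave the feature unchanged, which is the tolerance property the abstract refers to.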
|
835 |
Error weighted classifier combination for multi-modal human identification / Ivanov, Yuri; Serre, Thomas; Bouvrie, Jacob. 14 December 2005 (has links)
In this paper we describe a technique of classifier combination used in a human identification system. The system integrates all available features from multi-modal sources within a Bayesian framework. The framework allows representing a class of popular classifier combination rules and methods within a single formalism. It relies on a per-class measure of confidence derived from the performance of each classifier on training data, which is shown to improve performance on a synthetic data set. The method is especially relevant in autonomous surveillance settings, where varying time scales and missing features are a common occurrence. We show an application of this technique to a real-world surveillance database of video and audio recordings of people collected over several weeks in an office setting.
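The per-class confidence idea can be sketched in a few lines (an assumed simplification of the paper's Bayesian formalism; the function names and the weighted-sum combination rule are illustrative choices, not the authors' exact method):

```python
import numpy as np

def per_class_confidence(preds, labels, n_classes):
    """Per-class accuracy of one classifier on held-out data,
    used as that classifier's confidence weight for each class."""
    conf = np.zeros(n_classes)
    for c in range(n_classes):
        mask = labels == c
        conf[c] = (preds[mask] == c).mean() if mask.any() else 0.0
    return conf

def combine(posteriors, confidences):
    """Combine classifiers by weighting each one's class posteriors
    with its per-class confidence, then taking the best class.

    posteriors: list of (n_classes,) probability vectors, one per classifier.
    confidences: matching list of (n_classes,) confidence vectors.
    """
    score = sum(p * c for p, c in zip(posteriors, confidences))
    return int(np.argmax(score))
```

A classifier that is unreliable for some class (e.g. a missing modality) then contributes little weight for that class, which is the robustness property the abstract highlights for surveillance settings.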
|
836 |
Traumatologie d'urgence en oto-rhino-laryngologie / Emergency traumatology in otorhinolaryngology / Souterelle, Pierre. January 1900 (has links)
Thesis (M.D.)--Reims, 1973. No. 12. / Bibliography: leaves I-VI.
|
837 |
Υλοποίηση αλγορίθμου αναγνώρισης προσώπου σε έξυπνη κάμερα / Implementation of a face recognition algorithm on a smart camera / Παναγιωτόπουλος, Λεωνίδας 04 October 2011 (has links)
The purpose of this thesis was the optimization of a face recognition algorithm and its implementation on a smart camera. Face recognition was performed with the PCA algorithm (Principal Component Analysis). The algorithm was deployed on a laboratory smart camera equipped with a LEON2 processor and 16 MB of embedded SDRAM.
The optimization of the algorithm was driven by its applicability to the smart camera: the system was first implemented in the Matlab environment, then in the C programming language, and finally, after the appropriate configuration, on the smart camera itself. The smart camera can record and recognize faces with sufficient accuracy in less than one second.
The results were quite satisfactory, as the camera can recognize specific faces within a group of observed people. The main advantage of the implementation is its portability, which makes it useful in many applications requiring face recognition; it could also serve as the basis for further applications aimed at recognizing other kinds of images.
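The PCA (eigenfaces) recognition step referred to above can be sketched in NumPy; this is an illustrative reconstruction under assumed choices (SVD-derived components, nearest-neighbour matching in eigenface space), not the thesis's Matlab/C implementation:

```python
import numpy as np

def train_pca(faces, n_components):
    """Compute an eigenface basis from flattened training face images.

    faces: (n_samples, n_pixels) array of row vectors.
    Returns the mean face, the component matrix, and the
    training projections used later for matching.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]           # (k, n_pixels)
    projections = centered @ components.T    # (n_samples, k)
    return mean, components, projections

def recognize(face, mean, components, projections):
    """Return the index of the nearest training face in eigenface space."""
    weights = (face - mean) @ components.T
    dists = np.linalg.norm(projections - weights, axis=1)
    return int(np.argmin(dists))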
|
838 |
Studies of emotion recognition from multiple communication channels / Durrani, Sophia J. January 2005 (has links)
Crucial to human interaction and development, emotions have long fascinated psychologists. Current thinking suggests that specific emotions, regardless of the channel in which they are communicated, are processed by separable neural mechanisms. Yet much research has focused only on the interpretation of facial expressions of emotion. The present research addressed this oversight by exploring recognition of emotion from facial, vocal, and gestural tasks. Happiness and disgust were best conveyed by the face, yet other emotions were equally well communicated by voices and gestures. A novel method for exploring emotion perception, by contrasting errors, is proposed. Studies often fail to consider whether the status of the perceiver affects emotion recognition abilities. Experiments presented here revealed an impact of mood, sex, and age of participants. Dysphoric mood was associated with difficulty in interpreting disgust from vocal and gestural channels. To some extent, this supports the concept that neural regions are specialised for the perception of disgust. Older participants showed decreased emotion recognition accuracy but no specific pattern of recognition difficulty. Sex of participant and of actor affected emotion recognition from voices. In order to examine neural mechanisms underlying emotion recognition, an exploration was undertaken using emotion tasks with Parkinson's patients. Patients showed no clear pattern of recognition impairment across channels of communication. In this study, the exclusion of surprise as a stimulus and response option in a facial emotion recognition task yielded results contrary to those achieved without this modification. Implications for this are discussed. Finally, this thesis gives rise to three caveats for neuropsychological research. First, the impact of the observers' status, in terms of mood, age, and sex, should not be neglected. Second, exploring multiple channels of communication is important for understanding emotion perception. Third, task design should be appraised before conclusions regarding impairments in emotion perception are presumed.
|
839 |
Processing of emotional material in major depression : cognitive and neuropsychological investigations / Ridout, Nathan January 2005 (has links)
The aim of this thesis was to expand the existing knowledge base concerning the profile of emotional processing that is associated with major depression, particularly in terms of socially important non-verbal stimuli (e.g. emotional facial expressions). Experiment one utilised a face-word variant of the emotional Stroop task and demonstrated that depressed patients (DP) did not exhibit a selective attention bias for sad faces. Conversely, the healthy controls (HC) were shown to selectively attend to happy faces. At recognition memory testing, DP did not exhibit a memory bias for depression-relevant words, but did demonstrate a tendency to falsely recognise depression-relevant words that had not been presented at encoding. Experiment two examined the pattern of autobiographical memory (ABM) retrieval exhibited by DP and HC in response to verbal (words) and non-verbal (images & faces) affective cues. DP were slower than HC to retrieve positive ABMs, but did not differ from HC in their retrieval times for negative ABMs. Overall, DP retrieved fewer specific ABMs than did the HC. Participants retrieved more specific ABMs to image cues than to words or faces, but this pattern was only demonstrated by the HC. Reduced retrieval of specific ABMs by DP was a consequence of increased retrieval of categorical ABMs; this tendency was particularly marked when the participants were cued with faces. During experiment three, DP and HC were presented with a series of faces and were asked to identify the gender of the person featured in each photograph. Overall, gender identification times were not affected by the emotion portrayed by the faces. Furthermore, at subsequent recognition memory testing, DP did not exhibit a memory bias for sad faces. During experiment four, DP and HC were presented with videotaped depictions of 'realistic' social interactions and were asked to identify the emotion portrayed by the characters and to make inferences about the thoughts, intentions and beliefs of these individuals. Overall, DP were impaired in their recognition of happiness and in understanding social interactions involving sarcasm and deception. Correct social inference was significantly related to both executive function and depression severity. Experiment five involved assessing a group of eight patients who had undergone neurosurgery for chronic, treatment-refractory depression on the identical emotion recognition and social perception tasks that were utilised in experiment four. Relative to HC, surgery patients (SP) exhibited general deficits on all emotion recognition and social processing tasks. Notably, depression status did not appear to interact with surgery status to worsen these observed deficits. These findings suggest that the anterior cingulate region of the prefrontal cortex may play a role in correct social inference. Summary: Taken together, the findings of the five experimental studies of the thesis demonstrate that, in general, biases that have been observed in DP processing of affective verbal material generalise to non-verbal emotional material (e.g. emotional faces). However, there are a number of marked differences that have been highlighted throughout the thesis. There is also evidence that biased emotional processing in DP requires explicit processing of the emotional content of the stimuli. Furthermore, a central theme of the thesis is that deficits in executive function in DP appear to be implicated in the impairments of emotional processing that are exhibited by these patients.
|
840 |
Vers la compréhension du traitement dynamique du visage humain / Moving towards the understanding of dynamic human face processing / Richoz, Anne-Raphaëlle 12 January 2018 (has links)
The human visual system is steadily stimulated by dynamic cues. Faces provide crucial information for adapted social interactions. From an evolutionary perspective, humans have been far more extensively exposed to dynamic faces, as static face images appeared only recently with the advent of photography and the expansion of digital tools. Yet most studies investigating face perception have relied on static faces, and little is known about the mechanisms involved in dynamic face processing.
To clarify this issue, this thesis used dynamic faces to investigate different aspects of face processing in different populations and age groups. In Study 1, we used dynamic faces to investigate whether the ability of infants aged 6, 9, and 12 months to match audible and visible attributes of gender is influenced by the use of adult-directed (ADS) vs. infant-directed (IDS) speech. Our results revealed that from 6 months of age, infants matched female faces and voices when presented with ADS; this ability emerged at 9 months of age when presented with IDS. Altogether, these findings support the idea that the perception of multisensory gender coherence is influenced by the nature of social interactions.
In Study 2, we used a novel 4D technique to reconstruct the dynamic internal representations of the six basic expressions in a pure case of acquired prosopagnosia (i.e., a brain-damaged patient severely impaired in recognizing familiar faces), in order to re-examine the debated issue of whether identity and expression are processed independently. Our results revealed that our patient used all facial features to represent basic expressions, contrasting sharply with her suboptimal use of facial information for identity recognition. These findings support the idea that different sets of representations underlie the processing of identity and expression.
We then examined our patient's ability to recognize static and dynamic expressions using her internal representations as stimuli. Our results revealed that she was selectively impaired in recognizing many of the static expressions, whereas she displayed maximum accuracy in recognizing all the dynamic emotions with the exception of fear. The latter findings support recent evidence suggesting that separate cortical pathways, originating in early visual areas and not in the inferior occipital gyrus, are responsible for the processing of static and dynamic face information.
Moving on from our second study, in Study 3 we investigated whether dynamic cues offer processing benefits for the recognition of facial expressions in other populations with immature or fragile face processing systems. To this aim, we conducted a large cross-sectional study with more than 400 participants aged between 5 and 96 years, investigating their ability to recognize the six basic expressions presented under different temporal conditions. Consistent with previous studies, our findings revealed the highest recognition performance for happiness, regardless of age and experimental condition, as well as marked confusions among expressions with perceptually similar facial signals (e.g., fear and surprise). By using Bayesian modelling, our results further enabled us to quantify, for each expression and condition individually, the steepness of increase and decline in recognition performance, as well as the peak efficiency, the point at which observers' performance reaches its maximum before declining.
Finally, our results offered new evidence for a dynamic advantage in facial expression recognition, stronger for some expressions than others and more important at specific points in development. Overall, the results highlighted in this thesis underline the critical importance of research featuring dynamic stimuli in face perception and expression recognition studies, not only in the field of prosopagnosia, but also in other domains of developmental and clinical neuroscience.
|