1 |
Facial motion perception in autism spectrum disorder and neurotypical controls. Girges, Christine. January 2015 (has links)
Facial motion provides an abundance of information necessary for mediating social communication. Emotional expressions, head rotations and eye-gaze patterns allow us to extract categorical and qualitative information from others (Blake & Shiffrar, 2007). Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterised by a severe impairment in social cognition. One of the causes may be related to a fundamental deficit in perceiving human movement (Herrington et al., 2007). This hypothesis was investigated more closely within the current thesis. In neurotypical controls, the visual processing of facial motion was analysed via EEG alpha waves. Participants were tested on their ability to discriminate between successive animations (exhibiting rigid and nonrigid motion). The appearance of the stimuli remained constant over trials, meaning decisions were based solely on differential movement patterns. The parieto-occipital region was specifically selective to upright facial motion while the occipital cortex responded similarly to natural and manipulated faces. Over both regions, a distinct pattern of activity in response to upright faces was characterised by a transient decrease and subsequent increase in neural processing (Girges et al., 2014). These results were further supported by an fMRI study which showed sensitivity of the superior temporal sulcus (STS) to perceived facial movements relative to inanimate and animate stimuli. The ability to process information from dynamic faces was assessed in ASD. Participants were asked to recognise different sequences, unfamiliar identities and genders from facial motion captures. Stimuli were presented upright and inverted in order to assess configural processing. Relative to the controls, participants with ASD were significantly impaired on all three tasks and failed to show an inversion effect (O'Brien et al., 2014). 
Functional neuroimaging revealed atypical activities in the visual cortex, STS and fronto-parietal regions thought to contain mirror neurons in participants with ASD. These results point to a deficit in the visual processing of facial motion, which in turn may partly cause social communicative impairments in ASD.
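As an illustration of the kind of EEG measure the abstract describes (alpha-wave activity over parieto-occipital sites), the sketch below extracts alpha-band (8-12 Hz) power from a single synthetic EEG channel. This is not the thesis's actual pipeline; the sampling rate, filter order, and synthetic signal are assumptions for demonstration only.

```python
# Hedged sketch: alpha-band power extraction from one EEG channel.
# Sampling rate, filter design, and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of data

# Synthetic channel: a 10 Hz alpha burst plus broadband noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# 4th-order Butterworth band-pass restricted to the alpha band (8-12 Hz).
b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

# Instantaneous alpha power via the analytic-signal envelope.
power = np.abs(hilbert(alpha)) ** 2
print(round(float(power.mean()), 3))
```

A transient decrease then increase in processing, as reported in the abstract, would appear in such a trace as a dip and rebound in the envelope over time.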
|
3 |
Using two- and three-dimensional kinematic analysis to compare functional outcomes in patients who have undergone facial reanimation surgery. Dunwald, Lisa. 11 1900 (has links)
The current study was designed to: (1) compare the sensitivity of a 2-dimensional video-based system with a 3-dimensional optical system, and (2) investigate movement on the affected and unaffected side of the face during the production of various functional movement tasks in 5 patients who had undergone facial reanimation surgery. The study showed that: (1) distance is the most valuable measure for evaluating facial paralysis, regardless of system; (2) movements associated with maximal contraction and running speech tasks are most informative when assessing facial paralysis; (3) area and volume ratios may be an appropriate measure for tracking changes in facial movement over time; (4) velocity and acceleration measures provide minimal information regarding facial movement; and (5) 2-dimensional analysis is most effective when distance is measured during maximal contraction and running speech tasks. Both systems were effective in tracking small movements of the face, but the 3-dimensional system was superior overall.
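The measures the study compares (distance, velocity, acceleration, and side-to-side ratios) can be sketched from tracked marker trajectories with finite differences. The marker layout, capture rate, and synthetic paths below are illustrative assumptions, not the study's data.

```python
# Hedged sketch: kinematic measures from facial marker trajectories.
# Capture rate and trajectories are assumptions for illustration.
import numpy as np

fs = 60.0  # assumed capture rate (frames/s)
t = np.arange(0, 1.0, 1.0 / fs)

def trajectory(amplitude):
    """Synthetic 3-D path of one facial marker (n_frames x 3), in mm."""
    x = amplitude * np.sin(2 * np.pi * 1.0 * t)
    return np.stack([x, 0.5 * x, np.zeros_like(x)], axis=1)

unaffected = trajectory(10.0)  # full excursion on the unaffected side
affected = trajectory(4.0)     # reduced excursion on the affected side

def max_distance(path):
    """Peak displacement from the resting (first-frame) position."""
    return float(np.linalg.norm(path - path[0], axis=1).max())

# Velocity and acceleration by finite differences along the time axis.
velocity = np.gradient(unaffected, 1.0 / fs, axis=0)    # mm/s
acceleration = np.gradient(velocity, 1.0 / fs, axis=0)  # mm/s^2

# Affected-to-unaffected distance ratio as a simple symmetry index.
symmetry_ratio = max_distance(affected) / max_distance(unaffected)
print(round(symmetry_ratio, 2))  # → 0.4
```

A ratio near 1.0 would indicate symmetric movement; tracking this ratio over time mirrors the study's suggestion that ratio measures can monitor recovery after surgery.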
|
4 |
Facial Motion Augmented Identity Verification with Deep Neural Networks. Sun, Zheng. 06 October 2023 (has links) (PDF)
Identity verification is ubiquitous in our daily life. By verifying the user's identity, the authorization process grants the privilege to access resources or facilities or perform certain tasks. The traditional and most prevalent authentication method is the personal identification number (PIN) or password. While these knowledge-based credentials can be lost or stolen, human biometric-based verification technologies have become popular alternatives in recent years. Nowadays, more people are used to unlocking their smartphones with their fingerprint or face instead of a conventional passcode. However, these biometric approaches have their weaknesses. For example, fingerprints can be easily fabricated, and a photo or image can spoof a face recognition system. In addition, these existing biometric-based identity verification methods can proceed even if the user is unaware, sleeping, or unconscious. Therefore, an additional level of security is needed. In this dissertation, we demonstrate a novel identity verification approach that makes the biometric authentication process more secure. Our approach requires only one regular camera to acquire a short video for computing the face and facial motion representations. It takes advantage of advancements in computer vision and deep learning techniques. Our new deep neural network model, or facial motion encoder, generates a representation vector for the facial motion in the video. The decision algorithm then compares this vector to the enrolled facial motion vector to determine their similarity for identity verification. We first proved its feasibility through a keypoint-based method. After that, we built a curated dataset and proposed a novel representation learning framework for facial motions. The experimental results show that this facial motion verification approach reaches an average precision of 98.8%, which is more than adequate for everyday use.
We also tested this algorithm on complex facial motions and proposed a new self-supervised pretraining approach to boost the encoder's performance. Finally, we evaluated two other potential upstream tasks that could help improve the efficiency of facial motion encoding. Through these efforts, we have built a solid benchmark for facial motion representation learning, and the techniques developed can inform other face-analysis and video-understanding research.
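The decision step described above (comparing a probe embedding against the enrolled facial-motion vector) can be sketched with cosine similarity against a threshold. The encoder itself is mocked out here; the vector dimensionality, threshold, and synthetic vectors are assumptions, not the dissertation's actual values.

```python
# Hedged sketch: similarity-based decision step of identity verification.
# Embedding size and threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_vec, enrolled_vec, threshold=0.8):
    """Accept the claimed identity if the embeddings are close enough."""
    return cosine_similarity(probe_vec, enrolled_vec) >= threshold

rng = np.random.default_rng(1)
enrolled = rng.standard_normal(128)                  # stored at enrollment
genuine = enrolled + 0.1 * rng.standard_normal(128)  # same motion, new video
impostor = rng.standard_normal(128)                  # unrelated motion

print(verify(genuine, enrolled), verify(impostor, enrolled))
```

In a real system the threshold would be tuned on a validation set to trade off false accepts against false rejects, which is what an average-precision figure like the one reported summarizes.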
|
5 |
MPEG-4 Facial Feature Point Editor / Editor för MPEG-4 "feature points". Lundberg, Jonas. January 2002 (has links)
The use of computer-animated interactive faces in film, TV, and games is ever growing, with new application areas emerging on the Internet and in mobile environments. Morph targets are one of the most popular methods for animating the face. Up until now, 3D artists had to design each morph target defined by the MPEG-4 standard by hand, a very monotonous and tedious task. The newly developed method of Facial Motion Cloning [11] relieves the artists of this heavy work: from an already animated face model, the morph targets can now be copied onto a new static face model. The Facial Motion Cloning process requires that a subset of the feature points specified by the MPEG-4 standard be defined; their purpose is to correlate the facial features of the two faces. The goal of this project is to develop a graphical editor in which artists can define the feature points for a face model. The feature points are saved in a file format that can be used by Facial Motion Cloning software.
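A sketch of what such an editor's output might look like: a mapping from MPEG-4-style feature-point labels (group.index) to coordinates on the static face model, written to a file a Facial Motion Cloning tool could read back. The labels, coordinates, and JSON format here are illustrative assumptions, not the project's actual file format.

```python
# Hedged sketch: serializing feature-point definitions to a file.
# Labels, coordinates, and the JSON layout are illustrative assumptions.
import json

feature_points = {
    "2.1": [0.0, -4.2, 1.1],   # group.index labels in the MPEG-4 style
    "3.5": [-1.6, 1.9, 0.8],   # (specific label meanings are assumed here)
    "9.15": [0.0, 0.2, 2.3],
}

# Write the mapping so a cloning tool can correlate the two faces later.
with open("feature_points.json", "w", encoding="utf-8") as f:
    json.dump(feature_points, f, indent=2)

# Read it back, as the cloning software would.
with open("feature_points.json", encoding="utf-8") as f:
    loaded = json.load(f)
print(len(loaded))  # → 3
```

Keying the file on standard feature-point labels rather than model-specific vertex indices is what lets one definition file correlate features across different face meshes.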
|