  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Facial motion perception in autism spectrum disorder and neurotypical controls

Girges, Christine January 2015 (has links)
Facial motion provides an abundance of information necessary for mediating social communication. Emotional expressions, head rotations and eye-gaze patterns allow us to extract categorical and qualitative information from others (Blake & Shiffrar, 2007). Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterised by a severe impairment in social cognition. One of the causes may be related to a fundamental deficit in perceiving human movement (Herrington et al., 2007). This hypothesis was investigated more closely within the current thesis. In neurotypical controls, the visual processing of facial motion was analysed via EEG alpha waves. Participants were tested on their ability to discriminate between successive animations (exhibiting rigid and nonrigid motion). The appearance of the stimuli remained constant over trials, meaning decisions were based solely on differential movement patterns. The parieto-occipital region was specifically selective to upright facial motion, while the occipital cortex responded similarly to natural and manipulated faces. Over both regions, a distinct pattern of activity in response to upright faces was characterised by a transient decrease and subsequent increase in neural processing (Girges et al., 2014). These results were further supported by an fMRI study which showed sensitivity of the superior temporal sulcus (STS) to perceived facial movements relative to inanimate and animate stimuli. The ability to process information from dynamic faces was then assessed in ASD. Participants were asked to recognise different sequences, unfamiliar identities and genders from facial motion captures. Stimuli were presented upright and inverted in order to assess configural processing. Relative to the controls, participants with ASD were significantly impaired on all three tasks and failed to show an inversion effect (O'Brien et al., 2014). In participants with ASD, functional neuroimaging revealed atypical activity in the visual cortex, the STS and fronto-parietal regions thought to contain mirror neurons. These results point to a deficit in the visual processing of facial motion, which in turn may partly cause social communicative impairments in ASD.
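The EEG analysis described above centres on alpha-band activity over parieto-occipital and occipital sites, where a drop in alpha power is conventionally read as increased cortical engagement. Purely as an illustration (this is not the thesis's actual pipeline; the band limits, sampling rate and segment length are assumptions), a minimal sketch of estimating alpha-band power for one channel with Welch's method:

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band for one EEG channel.

    eeg  : 1-D array of samples from a single channel (e.g. a parieto-occipital site)
    fs   : sampling rate in Hz
    band : (low, high) frequency limits of the alpha band in Hz (assumed 8-13 Hz here)
    """
    # Welch's method gives a smoothed power spectral density estimate.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Toy usage: 2 s of simulated data at 250 Hz with a 10 Hz component plus noise.
fs = 250
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(alpha_band_power(signal, fs))
```

Comparing this quantity across time windows (before vs. after stimulus onset) is one simple way to expose the transient decrease and subsequent increase the abstract describes.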
2

Using two- and three-dimensional kinematic analysis to compare functional outcomes in patients who have undergone facial reanimation surgery

Dunwald, Lisa Unknown Date
No description available.
3

Using two- and three-dimensional kinematic analysis to compare functional outcomes in patients who have undergone facial reanimation surgery

Dunwald, Lisa 11 1900 (has links)
The current study was designed to: (1) compare the sensitivity of a 2-dimensional video-based system with a 3-dimensional optical system, and (2) investigate movement on the affected and unaffected side of the face during the production of various functional movement tasks in 5 patients who had undergone facial reanimation surgery. The study showed that: (1) distance is the most valuable measure for evaluating facial paralysis, regardless of system; (2) movements associated with maximal contraction and running speech tasks are most informative when assessing facial paralysis; (3) area and volume ratios may be an appropriate measure for tracking changes in facial movement over time; (4) velocity and acceleration measures provide minimal information regarding facial movement; and (5) 2-dimensional analysis is most effective when distance is measured during maximal contraction and running speech tasks. Both systems were effective in tracking small movements of the face, but the 3-dimensional system was superior overall.
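The distance measures highlighted above reduce to simple marker-trajectory geometry. The following minimal sketch (an assumed data layout, not the study's actual analysis software) computes the peak displacement of a tracked facial marker from its rest position and an affected/unaffected symmetry ratio; the same functions accept 2-dimensional (n_frames, 2) data, which is what makes a 2D/3D comparison straightforward:

```python
import numpy as np

def peak_displacement(traj, rest):
    """Largest Euclidean distance of a marker from its rest position.

    traj : (n_frames, 3) array of marker coordinates over a movement task
           (a (n_frames, 2) array works identically for a 2-D video system)
    rest : array with the marker's resting position
    """
    return np.linalg.norm(traj - rest, axis=1).max()

def symmetry_ratio(affected_traj, unaffected_traj, affected_rest, unaffected_rest):
    """Ratio of affected-side to unaffected-side peak displacement (1.0 = symmetric)."""
    return (peak_displacement(affected_traj, affected_rest)
            / peak_displacement(unaffected_traj, unaffected_rest))

# Toy usage with simulated oral-commissure markers during a maximal-contraction task.
frames = 100
unaffected = np.cumsum(np.random.randn(frames, 3) * 0.1, axis=0)
affected = unaffected * 0.4          # reduced excursion on the paralysed side
print(symmetry_ratio(affected, unaffected, affected[0], unaffected[0]))
```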
4

MPEG-4 Facial Feature Point Editor / Editor för MPEG-4 "feature points"

Lundberg, Jonas January 2002 (has links)
The use of computer-animated interactive faces in film, TV, and games is ever growing, with new application areas also emerging on the Internet and in mobile environments. Morph targets are one of the most popular methods for animating the face. Until now, 3D artists had to design each morph target defined by the MPEG-4 standard by hand, a monotonous and tedious task. The newly developed method of Facial Motion Cloning [11] relieves the artists of this heavy work: the morph targets can now be copied from an already animated face model onto a new static face model. The Facial Motion Cloning process requires that a subset of the feature points specified by the MPEG-4 standard be defined for each model, in order to correlate the facial features of the two faces. The goal of this project is to develop a graphical editor in which artists can define the feature points for a face model. The feature points are saved in a file format that can be used by Facial Motion Cloning software.
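Facial Motion Cloning only needs to know, for a subset of MPEG-4 feature points, which vertex of the static model each one corresponds to. A minimal sketch of how an editor might represent and save such picks follows; the feature-point labels, field names and JSON layout are illustrative assumptions, not the file format the thesis actually defines:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FeaturePoint:
    label: str       # MPEG-4 feature-point name in group.index notation, e.g. "9.3"
    vertex: int      # index of the picked vertex in the face model's mesh
    position: tuple  # (x, y, z) coordinates of that vertex in model space

def save_feature_points(points, path):
    """Write the picked feature points to a JSON file.

    The cloning step only needs the correspondence between feature points and
    mesh vertices; the exact on-disk format here is an illustrative choice.
    """
    with open(path, "w") as f:
        json.dump([asdict(p) for p in points], f, indent=2)

# Toy usage: two hypothetical picks made in the editor.
picks = [
    FeaturePoint("9.3", 1287, (0.01, -0.42, 0.11)),   # e.g. a nose feature point
    FeaturePoint("8.1", 2054, (0.00, -0.55, 0.09)),   # e.g. an outer-lip feature point
]
save_feature_points(picks, "feature_points.json")
```

Because the same labels are defined on the already-animated source model, loading this file is enough to establish the point-to-point correspondence the cloning algorithm needs.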
