1. Selective attention to face cues in adults with and without autism spectrum disorders
Rigby, Sarah Nugent, 01 September 2015
Individuals with autism spectrum disorders (ASD) use atypical approaches when processing facial stimuli. The first purpose of this research was to investigate face processing abilities in adults with ASD using several tasks, to compare patterns of interference between static identity and expression processing in adults with ASD and typical adults, and to investigate whether the introduction of dynamic cues caused members of one or both groups to shift from a global to a more local processing strategy. The second purpose was to compare the gaze behaviour of the groups as they viewed static and dynamic single- and multiple-character scenes. I tested 16 adults with ASD and 16 sex-, age-, and IQ-matched typical controls. In Study 1, participants completed a task designed to assess processing speed, another to measure visual processing bias, and two tasks involving static and dynamic face stimuli: an identity-matching task and a Garner selective attention task. Adults with ASD were less sensitive to facial identity and, unlike typical controls, showed negligible interference between identity and expression processing when judging both static and moving faces. In Study 2, participants viewed scenes while their gaze behaviour was recorded. Overall, participants with ASD showed fewer and shorter fixations on faces than their peers did. Additionally, whereas both the introduction of motion and the increased social complexity of the scenes affected the gaze behaviour of typical adults, only the latter manipulation affected adults with ASD. My findings emphasize the importance of using dynamic displays when studying typical and atypical face processing mechanisms.
2. Visual Saliency Analysis, Prediction, and Visualization: A Deep Learning Perspective
Mahdi, Ali Majeed, 01 August 2019
In recent years, great success has been achieved in predicting human eye fixations. Several studies have employed deep learning to attain high prediction accuracy, relying on networks pre-trained for object classification: they either treat saliency as a transfer-learning problem or use the pre-trained weights to initialize a saliency model. Pre-trained networks are used because the available human-fixation datasets are relatively small for training a deep model from scratch. A second, less often addressed problem is that the computational demands of such deep models require expensive hardware. This dissertation proposes two approaches to these problems. The first, codenamed DeepFeat, incorporates deep features from convolutional neural networks pre-trained for object and scene classification; it is the first approach to use deep features without any further learning. The performance of the DeepFeat model is extensively evaluated over a variety of datasets using a variety of implementations. The second approach is a deep learning saliency model, codenamed ClassNet, which differs from other deep learning saliency models in two main ways: it is the only one that learns its weights from scratch, and it treats fixation prediction as a classification problem, whereas other models treat it as a regression problem or as a classification of a regression problem.
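The core DeepFeat idea described above, combining convolutional feature maps from a pre-trained network into a saliency map without further learning, can be sketched roughly as follows. This is an illustration, not the dissertation's code: the feature maps here are random placeholders standing in for real pre-trained CNN activations, and `feature_maps_to_saliency` is a hypothetical helper.

```python
import numpy as np

def feature_maps_to_saliency(feature_maps, out_size):
    """Fuse per-layer CNN feature maps into one saliency heat map.

    feature_maps: list of arrays shaped (channels, h, w), e.g. activations
    taken from several layers of a pre-trained network.
    out_size: (height, width) of the output saliency map.
    """
    combined = np.zeros(out_size)
    for fmap in feature_maps:
        # Average across channels to get a single activation map per layer.
        avg = fmap.mean(axis=0)
        # Nearest-neighbour upsample each layer's map to the output size.
        rows = np.linspace(0, avg.shape[0] - 1, out_size[0]).astype(int)
        cols = np.linspace(0, avg.shape[1] - 1, out_size[1]).astype(int)
        combined += avg[np.ix_(rows, cols)]
    # Normalize to [0, 1] so the result can be rendered as a heat map.
    combined -= combined.min()
    if combined.max() > 0:
        combined /= combined.max()
    return combined

# Random placeholders for activations from two hypothetical CNN layers.
rng = np.random.default_rng(0)
maps = [rng.random((64, 28, 28)), rng.random((128, 14, 14))]
saliency = feature_maps_to_saliency(maps, (224, 224))
print(saliency.shape)  # (224, 224)
```

Note that no weights are updated anywhere in this pipeline, which is the sense in which such an approach uses deep features "without further learning"; a real system would replace the random arrays with activations extracted from a network pre-trained for object or scene classification.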