311. Factors Affecting Adult Mental Rotation Performance. Nazareth, Alina. 22 June 2015.
Research on mental rotation has consistently found sex differences, with males outperforming females on mental rotation tasks like the Vandenberg and Kuse (1978) mental rotation test (MRT; D. Voyer, Voyer, & Bryden, 1995). Mental rotation ability has been found to be enhanced with experience (Nazareth, Herrera, & Pruden, 2013) and training (Wright, Thompson, Ganis, Newcombe, & Kosslyn, 2008), and the effects of training have been found to be transferable to other spatial tasks (Wright et al., 2008) and sustainable for months (Terlecki, Newcombe, & Little, 2008). Although we are now fairly certain about the malleability of spatial skills and the role of spatial activity experience, we seem to have undervalued an important piece of the puzzle: what is the mechanism by which experiential factors enhance mental rotation performance? In other words, what is it that develops in an individual as a consequence of experience? The current dissertation sought to address this gap in the literature by examining cognitive strategy selection as a possible mechanism by which experiential factors, like early spatial activity experience, enhance mental rotation performance. A total of 387 adult university students were randomly assigned to one of three experimental conditions, which differed in the amount and type of non-spatial information present in the task stimuli. Participants' eye movements were recorded using a Tobii X60 eye tracker. Study I investigated the different types of cognitive strategies selected during mental rotation, with eye movement patterns used as indicators of the underlying cognitive strategies. A latent profile analysis revealed two distinct eye movement patterns that significantly predicted mental rotation performance. Study II examined the role of early spatial activity experience in mental rotation performance.
Male sex-typed spatial activities were found to significantly mediate the relation between participant sex and mental rotation performance. Finally, Study III examined the developmental role of early spatial activity experience in cognitive strategy selection and strategy flexibility to enhance mental rotation performance. Strategy flexibility was found to be significantly associated with mental rotation performance. Male sex-typed spatial activity experiences were found to be significantly associated with cognitive strategy selection but not strategy flexibility. Implications for spatial training and educational pedagogy in the STEM fields are discussed.
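The mediation analysis in Study II follows the standard indirect-effect logic: the effect of sex on mental rotation performance is decomposed into a direct path and a path running through spatial activity experience. A minimal sketch with simulated data; all variable names, effect sizes, and the least-squares estimator are illustrative assumptions, not values or methods taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data (illustrative only): sex -> spatial activity -> MRT score
sex = rng.integers(0, 2, n).astype(float)           # 0 = female, 1 = male
activity = 0.8 * sex + rng.normal(0, 1, n)          # mediator
mrt = 0.6 * activity + 0.1 * sex + rng.normal(0, 1, n)  # outcome

def ols_coefs(y, predictors):
    """Least-squares coefficients for y ~ predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols_coefs(activity, [sex])[1]            # path a: sex -> mediator
b = ols_coefs(mrt, [sex, activity])[2]       # path b: mediator -> outcome
c_prime = ols_coefs(mrt, [sex, activity])[1]  # direct effect of sex

indirect = a * b  # the mediated (indirect) effect
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

With these simulated effect sizes the indirect effect a*b is around 0.5; in the actual study, the significance of such an effect would be assessed with a bootstrap or Sobel test rather than read off directly.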
312. Face processing in young children with ASD: an eye-tracking perspective. Guillon, Quentin. 18 November 2014.
Faces are important for social interactions, as they convey rich information about the social environment. Impairment in social interaction is one of the core symptoms of Autism Spectrum Disorder (ASD) and has been related to atypical face processing. Here, we investigated face processing in preschool children with ASD, aged 24 to 60 months, using eye-tracking methodology. In the first study, we showed that young children with ASD, just like typically developing children, are sensitive to face-like objects, suggesting that first-order configural processing is intact in ASD. According to these results, the nature of facial representations may be qualitatively similar between groups. In the second study, we tested for the presence of a left gaze bias in response to faces presented in central vision. A lack of left gaze bias was found in young children with ASD, which may reflect atypical right-hemispheric lateralization for face processing. Finally, the third study analyzed the visual scanning of static faces and showed an atypical exploration pattern concentrated on the eye region. Overall, these studies argue for the presence of configural face processing in preschoolers with ASD, despite differences in strategy from typically developing children. Future studies will have to specify the mechanisms underlying configural face processing in ASD.
313. Performance evaluation of eye tracking algorithms for wearable computer interaction. Fernando Omar Aluani. 08 December 2017.
Eye tracking is used more and more in human-computer interaction, whether as a form of input (often replacing the mouse, especially for people with physical disabilities) or as a means to study a person's attention patterns (during activities such as grocery shopping, reading web pages, or driving a car). At the same time, wearable devices, such as small head-mounted displays and health and fitness sensors, have improved considerably in recent years, finally becoming accessible to mainstream consumers. This form of technology is defined by devices that a user wears on the body, like a piece of clothing or an accessory. The device and the user are in constant interaction, and such systems are usually made to improve the user's ability to execute a task (for example, by giving contextualized information about the task at hand) or to ease the concurrent execution of several tasks. The use of eye trackers in wearable computing enables a new form of interaction with these devices, letting the user interact with them while performing another action with the hands. In wearable devices, energy consumption is an important factor that affects the system's utility and must be considered in its design. Unfortunately, current eye trackers ignore energy consumption and focus mainly on precision and accuracy, following the idea that working with higher-resolution, higher-frequency images improves performance. However, processing more frames per second, or larger frames, requires more computing power, consequently increasing energy expenditure. A more economical device has several benefits, such as less heat generation and a longer life span for its electronic components; the greatest impact, though, is longer battery life for wearable devices. Energy can be saved by lowering the frequency and resolution of the camera used by the tracker, but the effect of these parameters on the precision and accuracy of gaze estimation had not been investigated until now. In this work we propose an eye tracking testing platform that allows the integration of existing eye tracking algorithms, such as Starburst, ITU Gaze Tracker, and Pupil, to study and compare the impact of varying camera resolution and frequency on the accuracy and precision of the algorithms. Through a user experiment we analyzed the performance and energy consumption of these algorithms under several resolution and frequency values. Our results indicate that merely lowering the resolution from 480 to 240 lines (keeping the image aspect ratio) already yields at least 66% energy savings in some trackers without significant loss of accuracy.
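The reported savings are consistent with simple pixel-throughput arithmetic: halving both image dimensions (480 to 240 lines at a fixed aspect ratio) quarters the number of pixels processed per second. A back-of-the-envelope sketch, under the simplifying assumption that processing cost scales linearly with pixel throughput; the thesis measures actual consumption rather than assuming this, and the 640x480 @ 30 Hz capture settings here are illustrative:

```python
def pixel_rate(width: int, height: int, fps: float) -> float:
    """Pixels the tracker must process per second."""
    return width * height * fps

high = pixel_rate(640, 480, 30)  # assumed VGA capture at 30 Hz
low = pixel_rate(320, 240, 30)   # 240 lines, same aspect ratio and frame rate

saving = 1 - low / high
print(f"pixel-throughput reduction: {saving:.0%}")  # prints "75%"
```

Under this linear model the upper bound on savings is 75%, so a measured figure of "at least 66%" is plausible once fixed overheads (sensor readout, OS, idle power) are accounted for.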
314. EyeSwipe: text entry using gaze paths. Andrew Toshiaki Nakayama Kurauchi. 30 January 2018.
People with severe motor disabilities may communicate using their eye movements, aided by a virtual keyboard and an eye tracker. Text entry by gaze may also benefit users immersed in virtual or augmented realities, when they do not have access to a physical keyboard or touchscreen. Thus, users both with and without disabilities may take advantage of the ability to enter text by gaze. However, methods for text entry by gaze are typically slow and uncomfortable. In this thesis we propose EyeSwipe as a step towards fast and comfortable text entry by gaze. EyeSwipe maps gaze paths into words, similarly to how finger traces are used by swipe-based methods on touchscreen devices. A gaze path differs from a finger trace in that it has no clear start and end positions. To segment the gaze path from the user's continuous gaze data stream, EyeSwipe requires the user to explicitly indicate its beginning and end. The user can quickly glance at the vicinity of the other characters that compose the word. Candidate words are sorted based on the gaze path and presented to the user. We discuss two versions of EyeSwipe. EyeSwipe 1 uses a deterministic gaze gesture called Reverse Crossing to select both the first and last letters of the word. Drawing on the lessons learned during the development and testing of EyeSwipe 1, we propose EyeSwipe 2, in which the user issues commands to the interface by switching focus between regions. In a text entry experiment comparing the two methods, 11 participants achieved an average text entry rate of 12.58 words per minute (wpm) with EyeSwipe 1 and 14.59 wpm with EyeSwipe 2 after using each method for 75 minutes. The maximum entry rates achieved with EyeSwipe 1 and EyeSwipe 2 were, respectively, 21.27 wpm and 32.96 wpm. Participants considered EyeSwipe 2 more comfortable and faster, but less accurate, than EyeSwipe 1. Additionally, with EyeSwipe 2 we propose using gaze path data to dynamically adjust gaze estimation. Using data from the experiment, we show that gaze paths can be used to improve gaze estimation dynamically during the interaction.
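Entry rates like those above are conventionally computed with the text-entry community's standard metric, where one "word" is defined as five characters (including spaces). A minimal sketch of that metric; the formula is the conventional one from the text-entry literature, and the sample values are illustrative, not data from the experiment:

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry rate: one word = 5 characters; the first
    character carries no measurable entry time, hence len - 1."""
    return ((len(transcribed) - 1) / 5.0) * (60.0 / seconds)

# Illustrative: a 64-character phrase entered in 30 seconds
rate = words_per_minute("x" * 64, 30.0)
print(f"{rate:.2f} wpm")  # prints "25.20 wpm"
```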
315. Links between rhythmic abilities and reading decoding skills: associating ocular periodicities with perceptual chunking in read speech. Rossier-Bisaillon, Antonin. 12 1900.
Numerous studies have documented links between abilities in tasks involving rhythm and some basic reading skills. For instance, significant correlations are observed between children's or adolescents' reading skills and their performance in tasks of motor synchronization to an auditory cadence or rhythmic sequence. Moreover, it is known that dyslexia is often accompanied by difficulties at the rhythmic level, and recent studies in electroencephalography (EEG) and magnetoencephalography (MEG) have suggested that these deficits may result from atypical patterns of activity in low-frequency neuronal oscillations of the central nervous system. How can these developmental links between rhythm and reading be explained? In some studies, phonological awareness is used to understand this association, hence proposing an indirect explanation relying on a third variable to make sense of the phenomenon. Unlike these theories, the present work proposes a more direct explanation of the link between rhythm and reading, in which the emphasis is put on the intrinsic role of rhythm in the perceptual chunking of verbal information during oral or silent reading. To test this hypothesis, an experiment is presented in which 43 participants read texts aloud while their voice and eye movements were recorded by an eye-tracking system. Results show that the implicit rhythm of the text is strictly followed in the participants' oral reading, independently of auditory rhythmic primes presented before the text. In addition, probabilistic measures of eye fixations resemble the voice's rhythmic chunks, highlighting possible rhythmic properties of visual sampling in text reading. We conclude by discussing the implications of these results for theories of reading.
316. Exploitation of multimodality for saliency analysis and audiovisual quality assessment. Sidaty, Naty. 11 December 2015.
Audiovisual data are part of our daily life, whether for professional needs or simply for leisure. The plethoric quantity of such data requires compression for both storage and transmission, which may degrade audiovisual quality if perceptual aspects are not taken into account. The literature on saliency and quality is very rich, but it often ignores the audio component, which plays an important role in the visual scanpath and in the quality of experience. This thesis aims to help fill the lack of multimodal approaches by following a dedicated experimental procedure. The proposed work is twofold: visual attention modelling and multimodal quality evaluation. First, in order to understand and analyze the influence of audio on human eye movements, we ran several eye-tracking experiments involving a panel of observers and exploiting a video dataset constructed for this context. The importance of faces was confirmed, particularly for talking faces, which show increased saliency. Building on these results, we proposed an audiovisual saliency model based on detecting speakers in the video and relying on low-level spatial and temporal features. Afterwards, the influence of audio on multimodal, multi-device quality was studied. To this end, psychovisual experiments were conducted with the aim of quantifying multimodal quality in the context of video streaming applications where various display devices may be used.
317. Prediction in aging language processing. Cheimariou, Spyridoula. 01 May 2016.
This thesis explores how predictions about upcoming linguistic stimuli are generated during real-time language comprehension in younger and older adults. Previous research has shown humans' ability to use rich contextual information to compute linguistic predictions during real-time language comprehension. Research on the modulating factors of prediction has shown, first, that predictions are informed by our experience with language and, second, that these predictions are modulated by cognitive factors such as working memory and processing speed. However, little is known about how these factors interact in aging, in which verbal intelligence remains stable or even increases, whereas processing speed, working memory, and inhibitory control decline. Experience-driven models of language learning argue that learning occurs across the life span instead of terminating once representations are learned well enough to approximate a stable state. In relation to aging, these models predict that older adults are likely to possess stronger learned associations, such that the predictions they generate during on-line processing may be stronger. At the same time, however, processing speed, working memory, and inhibitory control decline as a function of age, and age-related declines in these processes may reduce the degree to which older adults can predict. Here, I explored the interplay between language and cognitive factors in the generation of predictions and hypothesized that older adults would show stronger predictability effects than younger adults, likely because of their language experience. In this thesis, I provide evidence from reading eye movements, event-related potentials (ERPs), and EEG phase synchronization for the role of language experience and cognitive decline in prediction in younger and older English speakers.
I demonstrated that the eye-movement record is influenced by linguistic factors, which produce greater predictability effects as linguistic experience advances, and by cognitive factors, which produce smaller predictability effects as they decline. Similarly, the N400, an ERP response that is modulated by a word's predictability, was also moderated by cognitive factors. Most importantly, in the ERP study older adults, unlike younger adults, were able to use context efficiently to facilitate the processing of upcoming words. Further, I provide initial evidence that coherence analysis may be used as a measure of cognitive effort to illustrate the facilitation that prediction confers on language comprehenders. The results indicate that a comprehensive account of predictive processing needs to take into account both the role of experience acquired over the lifetime and the declines that aging brings.
318. Machine Learning Classification of Facial Affect Recognition Deficits after Traumatic Brain Injury for Informing Rehabilitation Needs and Progress. Syeda Iffat Naz. 07 January 2021.
A common impairment after a traumatic brain injury (TBI) is a deficit in emotion recognition, such as inferring others' intentions; some researchers have found these impairments in 39% of the TBI population. Much of the information needed to make inferences about emotions and mental states comes from visually presented, nonverbal cues (e.g., facial expressions or gestures). Theory of mind (ToM) deficits after TBI are partially explained by impaired visual attention to, and processing of, these important cues. This research found that patients with deficits in visual processing differ from healthy controls (HCs), and that visual processing problems can be detected from eye-tracking data collected with industry-standard eye-tracking hardware and software. We predicted that the eye-tracking data of the overall population would be correlated with TASIT test scores, and that the visual processing of impaired participants (who answered at least one TASIT question incorrectly) and unimpaired participants (who answered all TASIT questions correctly) would differ significantly. We divided the eye-tracking data into 3-second blocks of time-series data to detect the individual blocks most salient to the TASIT score. Our preliminary results suggest that we can predict impairment across the whole population from eye-tracking data, improving the F1 score from 0.54 to 0.73. For this, we developed optimized support vector machine (SVM) and random forest (RF) classifiers.
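The pipeline described, slicing gaze recordings into fixed 3-second blocks, summarizing each block, and training SVM and RF classifiers scored by F1, can be sketched as follows. The synthetic data, sampling rate, feature choices, and participant counts are all illustrative assumptions; they do not reproduce the thesis's dataset or its reported scores:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
HZ = 60          # assumed gaze sampling rate
BLOCK = 3 * HZ   # 3-second blocks, as in the study

def make_recording(impaired: bool, n_samples: int = 1800) -> np.ndarray:
    """Synthetic x/y gaze trace; impaired traces get noisier dispersion."""
    sd = 2.0 if impaired else 1.0
    return rng.normal(0.0, sd, size=(n_samples, 2))

def block_features(trace: np.ndarray) -> np.ndarray:
    """Mean and std of x and y within each 3 s block (4 features/block)."""
    blocks = trace[: len(trace) // BLOCK * BLOCK].reshape(-1, BLOCK, 2)
    return np.hstack([blocks.mean(axis=1), blocks.std(axis=1)])

X, y = [], []
for label in (0, 1):             # 0 = unimpaired, 1 = impaired
    for _ in range(20):          # 20 synthetic participants per group
        feats = block_features(make_recording(bool(label)))
        X.append(feats)
        y.extend([label] * len(feats))
X = np.vstack(X)
y = np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scores = {}
for clf in (SVC(), RandomForestClassifier(random_state=0)):
    name = type(clf).__name__
    scores[name] = f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(name, round(scores[name], 2))
```

Because the two synthetic groups differ cleanly in per-block dispersion, both classifiers separate them easily here; real gaze data would be far noisier, which is why the thesis searches for the most salient individual blocks.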
319. Autism, Alexithymia, and Anxious Apprehension: A Multimethod Investigation of Eye Fixation. Stephenson, Kevin G. 01 July 2018.
Reduced eye fixation and deficits in emotion identification accuracy have been commonly reported in individuals with autism spectrum disorder (AS), but are not ubiquitous. There is growing evidence that emotion processing deficits may be better accounted for by comorbid alexithymia (i.e., difficulty understanding and describing one's emotional state) rather than AS symptoms per se. Another possible explanation is anxiety, which is often comorbid with AS; emotion processing difficulties, including attentional biases, have also been observed in anxiety disorders, suggesting that anxiety symptoms may also influence emotion processing within AS. The purpose of the current study was to test the role of dimensional symptoms of autism, anxious apprehension (AA), and alexithymia in mediating eye fixation across two different face processing tasks with three adult samples: adults diagnosed with autism (AS; n = 30), adults with clinically elevated anxiety without autism (HI-ANX; n = 29), and neurotypical adults without high anxiety (NT; n = 46). In Experiment 1, participants completed an emotion identification task involving short video clips. Experiment 2 was a luminance change detection task pairing an emotional-expression photo with a neutral-expression photo. Joy, anger, and fear video and photo stimuli were used. Dimensional, mixed-effects models showed that symptoms of autism, but not alexithymia, predicted lower eye fixation across the two face processing tasks. There were no group differences or significant dimensional effects for accuracy. Anxious apprehension was negatively related to response time in Experiment 1 and positively related to eye fixation in Experiment 2. An attentional avoidance of negative emotions was observed in the NT and HI-ANX groups, but not the AS group. The bias was most pronounced at lower levels of AS symptoms and higher levels of AA symptoms.
The results provide some evidence for a possible anxiety-related subtype in AS, with participants endorsing high autism symptoms but low anxious apprehension demonstrating the more classic emotion processing deficit of reduced eye fixation.
320. The Effects of Native Advertising Disclosure and Advertising Recognition on Perceptions of News Story and News Website Credibility: A Consumer Neuroscience Approach. Mule, Jessica Loko. 14 September 2021.
The use of Native Advertising has sparked ethical concerns due to the controversial nature inherent in its definition: a paid form of advertising that disguises persuasive communications as the editorial content of the publishing media outlet. The growing popularity of Native Advertising in online news publishing over the past decade has contributed to increasingly blurred lines between commercial and editorial content, which in turn engenders feelings of deception in consumers and threatens to lower the trustworthiness of news publishers as an objective source. The purpose of this study was therefore to undertake theory testing guided by the tenets of the Persuasion Knowledge Model [PKM] (Friestad & Wright, 1994), to uncover whether disclosure serves as an effective measure in publishers' efforts to mitigate potential consumer deception. In particular, this study investigated: (1) the effect of disclosure label positioning on advertising recognition; (2) the mediating influence of visual attention on that relationship; and (3) the effect of advertising recognition on Inference of Manipulation [IMI] and perceptions of the online news publisher's credibility. The study used a quantitative multi-method research approach: an innovative neuromarketing, psychophysiological analysis of visual attention to disclosure, measured as Fixation (ms/m) using eye-tracking technology, in addition to self-reported measures obtained via an online survey. In line with similar past studies, this study used convenience non-probability sampling and random assignment of participants to experimental groups, with a sample of 87 students between the ages of 20 and 29 from the University of Cape Town (UCT).
Findings showed no significant difference in the likelihood of advertising recognition, either between the groups presented with a disclosure and those that were not, or between the varying positions of the disclosure. Additionally, advertising recognition had a positive influence on perceptions of credibility, contrary to theory and to evidence from past studies (described in the Literature Review). It was therefore concluded that disclosure and advertising recognition are necessary antecedents for critical processing and the formation of judgement, but are not by themselves sufficient for perceived transparency and subsequent evaluations of the publisher's credibility. This study presents design implications for practitioners in the online news publishing industry and for marketers: the perceived utility of the sponsored content, along with sponsorship transparency through disclosure, plays an important role in minimizing the negative influence of advertising recognition on perceived credibility.