  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Autisme, sillon temporal supérieur (STS) et perception sociale : études en imagerie cérébrale et en TMS / Autism, superior temporal sulcus (STS) and social perception : brain imaging and TMS studies

Baggio Saitovitch, Ana Riva 15 December 2014 (has links)
Autism spectrum disorders are thought to arise from alterations of neural circuits during development. Brain imaging studies in autism have revealed anatomo-functional abnormalities, particularly within the superior temporal sulcus (STS). In healthy subjects, the STS is strongly implicated in social perception and social cognition, whose deficits are core symptoms of autism. Indeed, abnormalities of social perception, notably a lack of preference for the eyes, have been described in adults and children with autism. In this thesis, we show that it is possible to modulate neural activity within the right STS using a transcranial magnetic stimulation (TMS) protocol, with significant effects on social perception, measured by eye tracking during passive viewing of social scenes: after inhibition of the right STS, healthy young volunteers looked less at the eyes of characters in social scenes. Furthermore, social perception parameters were correlated with resting cerebral blood flow (CBF), measured with arterial spin labelling (ASL) MRI: the healthy volunteers who looked most at the eyes during passive viewing were those with the highest resting CBF values within right temporal regions. The same correlation was observed in children with autism: the children who looked most at the eyes were those with higher resting CBF values within right temporal regions. Finally, preliminary results on the application of the TMS protocol in adults with autism open up new perspectives for innovative therapeutic strategies.
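The central analysis described above is a correlation between a gaze measure (proportion of fixation time on the eyes) and resting CBF. A minimal sketch of that kind of analysis follows; the Pearson function is generic, but the per-subject values are invented for illustration and do not come from the thesis:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: per-subject proportion of fixation time on the eyes
# and resting CBF in a right temporal region of interest.
eye_prop = [0.22, 0.35, 0.41, 0.50, 0.58, 0.63]
rest_cbf = [48.0, 51.5, 54.0, 55.2, 59.1, 60.3]

print(f"r = {pearson_r(eye_prop, rest_cbf):.3f}")  # positive: more eye-looking, higher CBF
```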
272

Análise da percepção da sinalização vertical por parte do condutor, utilizando ambientes simulados de direção: um estudo de caso na rodovia BR-116 / Analysis of the road signs perception in driving simulated environments: a case study on the BR-116 highway

Castillo Rangel, Miguel Andrés 15 May 2015 (has links)
Driving simulators are research tools that allow driver behavior to be studied across a variety of driving scenarios in a fast, safe, and cost-effective way. This study is part of a research project whose goal is to use these tools to assess road signage projects before their implementation on the roadway. Specifically, the goal of this study was to analyze how drivers perceive road signs within a simulated driving environment, supported by an eye-tracking system. The research comprised assembling the driving simulator and the eye-tracking system, generating the simulated driving environment, running a simulator experiment to measure sign perception within the virtual environment, and, finally, analyzing and validating the results. In the experiment, twenty-one drivers drove over a ten-kilometer virtual segment of the BR-116 highway containing thirty-one traffic signs, in order to measure the perception distance, the number of eye fixations, and the observation time for each sign, as well as the change in speed after its perception. Sign perception within the virtual environment was similar to that reported in the literature for on-road studies: on average, drivers perceived one in three signs, the mean observation time was 360 milliseconds, the mean perception distance was 100 meters, and only the perception of speed-limit signs had a relevant effect on driver behavior. Furthermore, relative validity was observed between speeds in the simulator and the operating speeds measured on the studied segment. These results support the feasibility and validity of using driving simulators to assess road signage projects. Finally, as an additional contribution, measures were proposed to improve both the signage of the studied segment and the realism of the driving simulator.
273

An Eye-Tracking Evaluation of Multicultural Interface Designs

Shaw, Daniel January 2005 (has links)
Thesis advisor: James Gips / This paper examines the impact of a multicultural approach on the usability of web and software interface designs. Through the use of an eye-tracking system, the study compares the ability of American users to navigate traditional American and Japanese websites. The ASL R6 eye-tracking system recorded user search latency and the visual scan path while locating specific items on the American and Japanese pages. Experimental results found statistically significant differences in search latency when searching for left- or right-oriented navigation menus. Among the participants, visual observation of scan paths indicated a strong preference for initial movements toward the left. These results demonstrate the importance of adapting web layouts and navigation menus for American and Japanese users. The paper further discusses the potential strengths of modifying interface designs to correspond with such cultural search tendencies, and offers suggestions for further research. / Thesis (BA) — Boston College, 2005. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: Computer Science. / Discipline: College Honors Program.
274

Nas partituras das emoções: processamento de estímulos afetivos musicais e visuais em crianças e adolescentes com Síndrome de Williams / In scores of emotions: processing of musical and visual affective stimuli in children and adolescents with Williams Syndrome

Andrade, Nara Cortes 18 December 2017 (has links)
Understanding the foundations of social behavior and human socio-affective development is essential for both individuals with typical development (TD) and those with neuropsychiatric disorders. Williams Syndrome (WS) is a rare neurogenetic condition caused by the deletion of approximately 28 genes on chromosome 7q11.23. Its symptomatology ranges from facial dysmorphisms to alterations in cognitive and socio-affective functioning, with mild to moderate intellectual disability. The processing of affective stimuli in individuals with WS has been a focus of great interest. Although part of the research indicates that this population has a preserved ability to recognize facial expressions of positive emotions and impaired recognition of negative emotions, there is not yet consensus in this field. At the same time, studies indicate that this population shows greater interest in music and differences in the neural processing of musical excerpts with affective valence. The present work aimed to characterize the processing of musical and visual affective stimuli in children and adolescents with Williams Syndrome. Study I sought to validate musical excerpts with affective valence in Brazilian culture and to analyze the effect of musical training on the understanding of emotions in music. Excerpts with affective valence were rated by participants in line with the emotion intended by the composer, and similarly across the Brazilian and Canadian populations. The effect of musical training on the ability to recognize emotions in music was greatest for the emotions that were most difficult for participants as a whole. Study II aimed to characterize the musical profile of children and adolescents with WS and to compare the processing of musical affective stimuli between children and adolescents with WS and those with TD. People with WS were assessed as having greater overall musical ability; no differences were found regarding interest in musical activities. Study III aimed to differentiate the ability to recognize emotions and the eye-tracking patterns for visual affective stimuli between children and adolescents with WS and those with WS plus ASD symptoms (WS/ASD). People with WS spent more fixation time on the eyes and on happy faces compared to sad faces. Results indicate differences in emotion recognition and eye tracking in individuals with WS/ASD. The recognition pattern for musical and visual stimuli was similar in the WS population, with marked impairment in the recognition of negative emotions and preserved recognition of positive emotions. This finding reinforces the modularity of the neural processing of basic emotions. Children with WS recognized positive musical stimuli more easily than visual ones, suggesting that the musical domain is a strength of this population.
275

The spatiotemporal dynamics of visual attention during real-world event perception

Ringer, Ryan January 1900 (has links)
Doctor of Philosophy / Department of Psychological Sciences / Lester Loschky / Everyday event perception requires us to perceive a nearly constant stream of dynamic information. Although we perceive these events as being continuous, there is ample evidence that we “chunk” our experiences into manageable bits (Zacks & Swallow, 2007). These chunks can occur at fine and coarse grains, with fine event segments being nested within coarse-grained segments. Individual differences in boundary detection are important predictors for subsequent memory encoding and retrieval and are relevant to both normative and pathological spectra of cognition. However, the nature of attention in relation to event structure is not yet well understood. Attention is the process which suppresses irrelevant information while facilitating the extraction of relevant information. Though attentional changes are known to occur around event boundaries, it is still not well understood when and where these changes occur. A newly developed method for measuring attention, the Gaze-Contingent Useful Field of View Task (GC-UFOV; Gaspar et al., 2016; Ringer, Throneburg, Johnson, Kramer, & Loschky, 2016; Ward et al., 2018) provides a means of measuring attention across the visual field (a) in simulated real-world environments and (b) independent of eccentricity-dependent visual constraints. To measure attention, participants performed the GC-UFOV task while watching pre-segmented videos of everyday activities (Eisenberg & Zacks, 2016; Sargent et al., 2013). Attention was probed from 4 seconds prior to 6 seconds after coarse, fine, and non-event boundaries. Afterward, participants’ memories for objects and event order were tested, followed by event segmentation. Attention was predicted to either become impaired (attentional impairment hypothesis), or it was predicted to be broadly distributed at event boundaries and narrowed at event middles (the ambient-to-focal shift hypothesis). 
The results showed marginal evidence for both the attentional-impairment and the ambient-to-focal-shift hypotheses; however, model fit was equal for the two models. The results of this study were then used to develop a proposed program of research to further explore the nature of attention during event perception, as well as the ability of these two hypotheses to explain the relationship between attention and memory during real-world event perception.
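The probe design above (4 seconds before to 6 seconds after each boundary) amounts to binning probes by their offset from the nearest event boundary. A sketch of that bookkeeping, with hypothetical boundary times and an assumed 2-second bin width:

```python
import math

def bin_probe(probe_t, boundaries, pre=4.0, post=6.0, bin_width=2.0):
    """Assign a probe time (s) to a time bin relative to the nearest event
    boundary; returns the bin's left edge, or None outside [-pre, +post]."""
    offset = min((probe_t - b for b in boundaries), key=abs)
    if -pre <= offset <= post:
        return bin_width * math.floor(offset / bin_width)
    return None

boundaries = [30.0, 75.0]  # hypothetical coarse-boundary times (s)
print(bin_probe(28.5, boundaries))  # -2.0: the bin starting 2 s before a boundary
print(bin_probe(50.0, boundaries))  # None: too far from any boundary
```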
276

Avaliação de desempenho de algoritmos de estimação do olhar para interação com computadores vestíveis / Performance evaluation of eye tracking algorithms for wearable computer interaction

Aluani, Fernando Omar 08 December 2017 (has links)
Eye tracking has been increasingly used for human-computer interaction in several scenarios, either as a form of interaction (usually replacing the mouse, particularly for people with physical disabilities) or to study a person's attention patterns (in situations such as shopping at a market, reading a web page, or driving a car). At the same time, wearable devices such as small head-mounted displays and sensors that measure a user's health and physical activity have advanced considerably in recent years, finally becoming accessible to mainstream consumers. This form of technology is characterized by devices that the user wears on the body, like a piece of clothing or an accessory. The device and the user are in constant interaction, and such systems are designed to improve the execution of a task (for example, by providing contextualized information about the task at hand) or to facilitate the concurrent execution of several tasks. The use of eye trackers in wearable computing enables a new form of interaction for these devices, allowing the user to interact with them while using the hands for another activity. In wearable devices, energy consumption is an important factor that affects the system's usefulness and must be considered in its design. Unfortunately, current eye trackers ignore energy consumption and focus mainly on precision and accuracy, following the idea that working with higher-resolution images at higher frame rates yields better performance. However, processing more frames per second, or higher-resolution frames, demands more computing power and consequently increases energy expenditure. A more economical device has several benefits, such as lower heat generation and a longer life-span for its electronic components; the greatest impact, however, is longer battery life for wearable devices. Energy can be saved by lowering the resolution and frame rate of the camera used by the tracker, but the effects of these parameters on the precision and accuracy of gaze estimation had not been investigated until now. In this work we propose a testing platform that allows the integration of available gaze-tracking algorithms, such as Starburst, ITU Gaze Tracker, and Pupil, in order to study and compare the impact of varying resolution and frame rate on the accuracy and precision of the algorithms. Through a user experiment we analyzed the performance and energy consumption of these algorithms under several resolution and frame-rate values. Our results indicate that merely lowering the resolution from 480 to 240 lines (keeping the image aspect ratio) already yields at least a 66% energy saving in some trackers without significant loss of accuracy.
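The intuition behind the headline result is simple arithmetic: halving the line count while keeping the aspect ratio quarters the per-second pixel workload. A sketch of that calculation (the frame rate and aspect ratio here are assumptions; actual energy use is not strictly proportional to pixel count, and the 66% figure is the thesis's empirical measurement, not a derivation):

```python
def pixel_throughput(lines, aspect=4 / 3, fps=30):
    """Pixels processed per second for a camera with the given number of
    scan lines, aspect ratio, and frame rate (assumed values)."""
    width = round(lines * aspect)
    return width * lines * fps

hi = pixel_throughput(480)  # 640 x 480 at 30 fps
lo = pixel_throughput(240)  # 320 x 240 at 30 fps
print(lo / hi)  # 0.25: a quarter of the per-second pixel workload
```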
277

EyeSwipe: text entry using gaze paths / EyeSwipe: entrada de texto usando gestos do olhar

Kurauchi, Andrew Toshiaki Nakayama 30 January 2018 (has links)
People with severe motor disabilities may communicate using their eye movements, aided by a virtual keyboard and an eye tracker. Text entry by gaze may also benefit users immersed in virtual or augmented reality when they do not have access to a physical keyboard or touchscreen. Thus, users both with and without disabilities may take advantage of the ability to enter text by gaze. However, methods for text entry by gaze are typically slow and uncomfortable. In this thesis we propose EyeSwipe as a step towards fast and comfortable text entry by gaze. EyeSwipe maps gaze paths into words, similarly to how finger traces are used in swipe-based methods for touchscreen devices. A gaze path differs from a finger trace in that it has no clear start and end positions. To segment the gaze path from the user's continuous gaze data stream, EyeSwipe requires the user to explicitly indicate its beginning and end. The user can quickly glance at the vicinity of the other characters that compose the word. Candidate words are ranked based on the gaze path and presented to the user. We discuss two versions of EyeSwipe. EyeSwipe 1 uses a deterministic gaze gesture called Reverse Crossing to select both the first and last letters of the word. Building on the lessons learned during the development and testing of EyeSwipe 1, we propose EyeSwipe 2, in which the user issues commands to the interface by switching focus between regions. In a text entry experiment comparing the two versions, 11 participants achieved an average entry rate of 12.58 words per minute (wpm) with EyeSwipe 1 and 14.59 wpm with EyeSwipe 2 after using each method for 75 minutes. The maximum entry rates achieved with EyeSwipe 1 and EyeSwipe 2 were, respectively, 21.27 wpm and 32.96 wpm. Participants considered EyeSwipe 2 more comfortable and faster, but less accurate, than EyeSwipe 1. Additionally, with EyeSwipe 2 we propose using gaze path data to dynamically adjust gaze estimation. Using data from the experiment, we show that gaze paths can be used to dynamically improve gaze estimation during the interaction.
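EyeSwipe's actual candidate ranking is defined in the thesis itself; purely as an illustration of the general idea of scoring words against a gaze path, one can resample both the gaze path and the ideal path through each candidate's key centers and compare them pointwise. The key coordinates, words, and gaze samples below are all invented:

```python
import math

# Hypothetical key-center coordinates (x, y) on a virtual keyboard.
KEYS = {'c': (3, 2), 'a': (0, 1), 't': (4, 0), 'o': (8, 0), 'r': (3, 0)}

def resample(path, n=16):
    """Resample a polyline to n points at equal arc-length intervals."""
    d = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out, seg = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while seg < len(d) - 2 and d[seg + 1] < target:
            seg += 1
        span = (d[seg + 1] - d[seg]) or 1.0
        t = (target - d[seg]) / span
        (x0, y0), (x1, y1) = path[seg], path[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def score(gaze_path, word):
    """Mean pointwise distance between the resampled gaze path and the
    ideal path through the word's key centers (lower is better)."""
    pairs = zip(resample(gaze_path), resample([KEYS[ch] for ch in word]))
    return sum(math.dist(a, b) for a, b in pairs) / 16

gaze = [(2.8, 2.1), (0.4, 1.0), (3.9, 0.2)]  # noisy path near c -> a -> t
candidates = sorted(['cat', 'car', 'cot'], key=lambda w: score(gaze, w))
print(candidates)  # 'cat' should rank first
```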
278

Automatic Detection of Cognitive Load and User's Age Using a Machine Learning Eye Tracking System

Shojaeizadeh, Mina 18 April 2018 (has links)
As the amount of information captured about users has increased over the last decade, interest in personalized user interfaces has surged in the HCI and IS communities. Personalization is an effective means of accommodating differences between individuals. The fundamental idea behind personalization rests on the notion that if a system can gather useful information about the user, generate a relevant user model, and apply it appropriately, it is possible to adapt the behavior of a system and its interface to the user at the individual level. Personalization of user interface features can enhance usability. With recent technological advances, personalization can be achieved automatically and unobtrusively. A user interface can deploy a NeuroIS technology such as eye tracking, which learns from the user's visual behavior to provide an experience unique to each user. The advantage of eye-tracking technology is that subjects cannot consciously manipulate their responses, since eye movements are not readily subject to deliberate control. The objective of this dissertation is to develop a theoretical framework for user personalization during reading comprehension tasks based on two machine learning (ML) models. The proposed ML-based profiling process consists of characterizing the user's age and detecting the user's cognitive load while the user reads text. To this end, detection of cognitive load through eye-movement features was investigated during different cognitive tasks (see Chapters 3, 4 and 6) under different task conditions. Furthermore, separate studies (see Chapters 5 and 6) examined the relationship between users' eye movements and their age group (e.g., younger and older generations) during a reading comprehension task. A Tobii X300 eye-tracking device was used to record the eye-movement data for all studies. Eye-movement data were acquired via the Tobii eye-tracking software, then preprocessed and analyzed in R for the aforementioned studies.
Machine learning techniques were used to build predictive models. The aggregated results of the studies indicate that machine learning, together with a NeuroIS tool like eye tracking, can be used to model user characteristics such as age and user mental states such as cognitive load, automatically and implicitly, with above-chance accuracy (in the range of 70-92%). The results of this dissertation can be used in a more general framework to adaptively modify content to better serve users' cognitive and age-related needs. Text simplification and modification techniques might be developed for use in various scenarios.
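The dissertation's actual models are not reproduced here; as a toy illustration of classifying cognitive load from eye-movement features, a nearest-centroid classifier over two hypothetical features (mean fixation duration and pupil diameter) could look like this. All numbers are invented, though the directions (longer fixations and larger pupils under load) are consistent with the eye-tracking literature:

```python
import math

def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(col) / n for col in zip(*rows))

def train(samples):
    """samples: {label: [(fix_dur_ms, pupil_mm), ...]} -> centroid per label."""
    return {label: centroid(rows) for label, rows in samples.items()}

def predict(model, x):
    """Label whose centroid is nearest to the feature vector x."""
    return min(model, key=lambda label: math.dist(x, model[label]))

# Hypothetical training data for two load conditions.
train_data = {
    'low':  [(190, 3.1), (210, 3.0), (205, 3.2)],
    'high': [(280, 3.8), (300, 3.9), (265, 3.7)],
}
model = train(train_data)
print(predict(model, (290, 3.85)))  # high
print(predict(model, (200, 3.05)))  # low
```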
279

• Riktade ljudeffekters påverkan på den visuella uppmärksamheten : En ögonrörelsestudie / The influence of directional sound effects on visual attention: an eye-movement study

Hjärkéus, Erik, Larsson, Tim January 2018 (has links)
Because today's cinemas are adopting more advanced technology for directing sound effects in several different ways, it is important to understand how directional sound effects affect viewers. We investigated how visual attention differs between playback of directional and non-directional sound effects. We did this by means of an eye-movement study in which participants watched four adapted film sequences that we created ourselves. To analyze our data we used the area-of-interest method. Our results suggest that there is a difference in visual attention, and that viewers can, to some degree, be steered with directional sound effects. This knowledge allows filmmakers to use directional sound effects as a tool for guiding their audience.
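The area-of-interest method mentioned above reduces, at its simplest, to hit-testing gaze samples against labelled screen regions and accumulating dwell time. A sketch with hypothetical AOIs and an assumed fixed sampling interval:

```python
def dwell_times(samples, aois, dt_ms=20):
    """Accumulate dwell time (ms) per AOI from fixed-rate gaze samples.
    aois: {name: (x0, y0, x1, y1)} rectangles in screen coordinates."""
    totals = {name: 0 for name in aois}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dt_ms
    return totals

# Hypothetical AOIs for a film frame: a character on the left half of the
# screen and a sound-source region on the right.
aois = {'character': (0, 0, 800, 1080), 'source': (1200, 300, 1920, 900)}
samples = [(400, 500)] * 30 + [(1500, 600)] * 20 + [(1000, 100)] * 5
print(dwell_times(samples, aois))  # {'character': 600, 'source': 400}
```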
280

The role of working memory in comprehension of doubly embedded relative clauses: a self-paced reading and eye tracking study

Garbarino, Julianne T. January 2013 (has links)
Language processing has been a focus of working memory research since Baddeley introduced his Model of Working Memory in the 1970s. There has been continued discussion over whether the same working memory (WM) system that underlies verbally mediated tasks relying on conscious, controlled processing also provides the resources used in language processing. Recently, Caplan, DeDe, Waters, & Michaud (2011) found that increased reading times at only the most difficult point of the most difficult sentences presented in their study (sentences with doubly embedded relative clauses) correlated with improved comprehension. They hypothesized that this correlation arises because, at the points where normal parsing fails, individuals with high working memory capacity use ancillary comprehension mechanisms that rely on verbal working memory. Caplan and Waters (2013) proposed that the use of verbal working memory for ancillary comprehension in sentence processing may appear behaviorally as improved comprehension with longer reading times in self-paced reading tasks, and as regressive eye movements out of the points where parsing is thought to fail. This thesis attempted to replicate the above-mentioned finding of Caplan et al. (2011). The study also added an eye-tracking task, to enable measurement of regressive eye movements, and a measure of working memory, to permit analysis of individual differences. Forty-eight healthy adults completed a working memory battery (alphabet span, subtract-two span, and sentence span), a self-paced reading task, and an eye-tracking task. For the self-paced reading and eye-tracking components, participants read sentences with doubly embedded relative clauses and parallel sentences with sentential complements. Linear mixed-effects models found that self-paced reading times and go-past times at the hardest point of the harder sentences (those with doubly embedded relative clauses) increased as working memory increased.
These results support the hypothesis that ancillary comprehension mechanisms are used in sentence processing at points where comprehension is extremely difficult. In the attempted replication of the findings of Caplan et al. (2011), logistic mixed-effects models showed increased accuracy as reading times increased at the hardest point of the harder sentences, and also as reading times increased at five of the other seven segments. Logistic mixed-effects models showed no significant increase, as working memory increased, in regressions out of the hardest point of the harder sentences relative to the easier sentences. These results can be taken as further evidence, using eye-tracking methods combined with self-paced reading and measurement of working memory, that ancillary comprehension mechanisms may be used in sentence processing when the limits of the normal parser are exceeded.
