11

Facilitatory and Inhibitory Effects of Implicit Spatial Cues on Visuospatial Attention

Ghara Gozli, Davood 07 December 2011 (has links)
Previous work suggests that both concrete (e.g., hat, shoes) and abstract (e.g., god, devil) concepts with spatial associations engage attentional mechanisms, affecting subsequent target processing above or below fixation. Interestingly, both facilitatory and inhibitory effects have been reported to result from compatibility between target location and the meaning of the concept. To determine the conditions for obtaining these disparate effects, we varied the task (detection vs. discrimination), the stimulus onset asynchrony (SOA), and the concept type (abstract vs. concrete) across a series of experiments. Results suggest that the nature of the concepts underlies the different attentional effects. With abstract concepts, facilitation was observed across tasks and SOAs. With concrete concepts, inhibition was observed during the discrimination task and at short SOAs. Thus, the particular perceptual and metaphorical associations of a concept mediate its subsequent effects on visual target processing.
13

Visual Attention for Robotic Cognition: A Biologically Inspired Probabilistic Architecture

Begum, Momotaz January 2010 (has links)
The human being, the most magnificent autonomous entity in the universe, frequently decides `what to look at' in day-to-day life without even realizing the complexity of the underlying process. When it comes to designing such an attention system for autonomous robots, this apparently simple task suddenly appears extremely complex, involving highly dynamic interactions among motor skills, knowledge and experience developed over a lifetime, the densely connected circuitry of the visual cortex, and very fast timing. The most fascinating aspect of the primate visual attention system is that its underlying mechanism is not yet precisely known. Influential theories and hypotheses regarding this mechanism, however, have been proposed in psychology and neuroscience. These theories and hypotheses have encouraged research on synthetic modeling of visual attention in computer vision, computational neuroscience and, very recently, AI robotics. The motivation behind the computational modeling of visual attention is two-fold: understanding the mechanisms underlying primate cognition, and using the principle of focused attention in real-world applications, e.g., computer vision, surveillance, and robotics. Accordingly, two trends have emerged in the computational modeling of visual attention. The first is mostly focused on developing mathematical models that mimic, as closely as possible, the details of the primate attention system: the structure, the connectivity among visual neurons and among regions of the visual cortex, the flow of information, and so on. Such models provide a way to test theories of primate visual attention with minimal involvement of live subjects, a magnificent way to use technological advancement for the understanding of human cognition. 
The second trend in computational modeling, on the other hand, uses the methodological sophistication of biological processes (like visual attention) to advance technology. These models are mostly concerned with developing a technical system of visual attention that can be used in real-world applications where the principle of focused attention might play a significant role in managing redundant information. This thesis is focused on developing a computational model of visual attention for robotic cognition and therefore belongs to the second trend. Designing a visual attention model for robotic systems as a component of their cognition raises a number of challenges that generally do not appear in traditional computer vision applications of visual attention. Robotic models of visual attention, although heavily inspired by the rich visual attention literature in computer vision, adopt different measures to cope with these challenges. This thesis proposes a Bayesian model of visual attention designed specifically for robotic systems, tackling the challenges involved in robotic visual attention. The operation of the proposed model is guided by the theory of biased competition, a popular theory from cognitive neuroscience describing the mechanism of primate visual attention. The proposed Bayesian attention model offers a robot-centric approach to visual attention in which the head pose of a robot in the 3D world is estimated recursively so that the robot can focus on the most behaviorally relevant stimuli in its environment. The behavioral relevance of an object is determined by two criteria inspired by the postulates of the biased-competition hypothesis of primate visual attention. Accordingly, the proposed model encourages a robot to focus on novel stimuli, or on stimuli similar to a `sought-for' object, depending on the context. 
To address a number of robot-specific issues of visual attention, the proposed model is further extended to the multi-modal case, where speech commands from the human are used to modulate the visual attention behavior of the robot. The Bayesian model of visual attention, by virtue of its inherent sensor-fusion characteristics, naturally accommodates multi-modal information during attention selection. This enables the proposed model to serve as the core component of an attention-oriented, speech-based human-robot interaction framework. Extensive experiments are performed in the real world to investigate different aspects of the proposed Bayesian visual attention model.
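The recursive multi-modal selection this abstract describes can be illustrated with a toy sketch. This is not the thesis's actual model; the candidate stimuli, probability values, and function name below are invented purely for illustration. A prior belief over candidate stimuli is updated with independent visual and speech likelihoods, and the robot attends to the posterior maximum; the posterior then serves as the prior for the next step.

```python
import numpy as np

def fuse_attention(prior, visual_lik, speech_lik):
    """One Bayesian update: combine a prior belief over candidate
    stimuli with independent visual and speech likelihoods."""
    posterior = prior * visual_lik * speech_lik
    return posterior / posterior.sum()  # normalize to a distribution

# Three candidate stimuli in the scene (all values invented).
prior      = np.array([1/3, 1/3, 1/3])   # no initial preference
visual_lik = np.array([0.7, 0.2, 0.1])   # novelty / similarity evidence
speech_lik = np.array([0.5, 0.4, 0.1])   # e.g., a spoken cue naming stimulus 0

posterior = fuse_attention(prior, visual_lik, speech_lik)
attended = int(np.argmax(posterior))     # stimulus the robot attends to

# Recursion: feed the posterior back in as the next prior.
posterior2 = fuse_attention(posterior, visual_lik, speech_lik)
```

Because the update is just a product of per-modality likelihoods followed by normalization, adding a further modality (or dropping the speech term when no command is heard) changes only the number of factors, which is the sense in which Bayesian fusion "naturally accommodates" multi-modal information.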
15

Evaluating interactions of task relevance and visual attention in driver multitasking

Garrison, Teena Marie 10 December 2010 (has links)
Use of cellular phones while driving, and the safety implications thereof, has captured public and scientific interest. Previous research has shown that driver reactions and attention are impacted by cellular phone use. Generally, however, previous studies have not focused on how visual attention and driver performance may interact. Strayer and colleagues found lower recognition for items present in the driving environment when drivers were using a cellular phone than when not using the phone; however, the tested items were not directly relevant to driving. Relevance to driving may have an impact on attention allocation. The current project used a medium-fidelity driving simulator to extend previous research in two ways: 1) investigating how attention is allocated across driving-relevant and driving-irrelevant items in the environment, and 2) considering driving performance measures and eye movement measures together rather than in isolation, to better illustrate the impact of cellular phone distraction on driver behavior. Results from driving performance measures replicated previous findings that vehicle control is negatively impacted by driver distraction. Interestingly, no interactions of relevance and distraction were found, suggesting that participants responded to potential hazards similarly in driving-only and distraction conditions. In contrast to previous research, eye movement patterns (primarily measured by number of gazes) were impacted by distraction. Gaze patterns differed across relevance levels, with hazards receiving the most gazes and signs receiving the fewest. The relative size of the critical items may have influenced gaze probability in this relatively undemanding driving environment. In contrast to the driving performance measures, the eye movement measures did show an interaction between distraction and relevance; thus, eye movements may be a more direct and more sensitive measure of driver attention. 
Recognition memory results were consistently near chance performance levels and did not reflect the patterns found in the eye movement or driving performance measures.
16

Selective visual attention to novelty in elderly with senile dementia of the Alzheimer's type

Engelhardt, Nina January 1994 (has links)
No description available.
17

Modelling eye movements and visual attention in synchronous visual and linguistic processing

Dziemianko, Michal January 2013 (has links)
This thesis focuses on modelling visual attention in tasks in which vision interacts with language and other sources of contextual information. The work is based on insights provided by experimental studies in visual cognition and psycholinguistics, particularly cross-modal processing. We present a series of models of eye movements in situated language comprehension capable of generating human-like scan-paths. Moreover, we investigate the existence of high-level structure in scan-paths and the applicability of tools used in Natural Language Processing to the analysis of this structure. We show that scan-paths carry interesting information that is currently neglected in both experimental and modelling studies. This information, studied at a level beyond simple statistical measures such as proportion of looks, can be used to extract knowledge of more complicated patterns of behaviour, and to build models capable of simulating human behaviour in the presence of linguistic material. We also revisit the classical saliency model and its extensions, in particular the Contextual Guidance Model of Torralba et al. (2006), and extend it with memory of target positions in visual search. We show that models of contextual guidance should contain components responsible for short-term learning and memorisation. We also investigate the applicability of this type of model to the prediction of human behaviour in tasks with incremental stimuli, such as situated language comprehension. Finally, we investigate the issue of objectness and object saliency, including their effects on eye movements and human responses to experimental tasks. In a simple experiment we show that an object-based notion of saliency predicts fixation locations better than pixel-based saliency as formulated by Itti et al. (1998). 
In addition, we show that object-based saliency fits into current theories such as cognitive relevance and can be used to build unified models of cross-referential visual and linguistic processing. This thesis forms a foundation for a more detailed study of scan-paths within an object-based framework such as the Cognitive Relevance Framework (Henderson et al., 2007, 2009), by providing models capable of explaining human behaviour and by delivering tools and methodologies to predict which objects will be attended to during synchronous visual and linguistic processing.
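The contrast the abstract draws between pixel-based and object-based saliency can be sketched in a few lines. This is a toy illustration only, not the models of Itti et al. or the thesis: the saliency map and object masks below are fabricated. Pixel-based selection attends the single most salient map location, while object-based selection pools the map over object regions and ranks whole objects; the two can disagree, as here.

```python
import numpy as np

# Toy 4x4 saliency map (as might come from center-surround contrast);
# all values invented for illustration.
saliency = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.3, 0.1],
    [0.1, 0.3, 0.4, 0.4],
    [0.0, 0.1, 0.4, 0.5],
])

# Hypothetical object masks: object 0 covers the top-left quadrant,
# object 1 the bottom-right quadrant.
masks = {0: np.zeros((4, 4), dtype=bool), 1: np.zeros((4, 4), dtype=bool)}
masks[0][:2, :2] = True
masks[1][2:, 2:] = True

# Pixel-based: attend the single most salient location.
peak = np.unravel_index(np.argmax(saliency), saliency.shape)

# Object-based: attend the object with the highest mean saliency.
object_scores = {k: saliency[m].mean() for k, m in masks.items()}
best_object = max(object_scores, key=object_scores.get)
```

In this fabricated example the saliency peak falls inside object 0, yet object 1 has the higher mean saliency, so the two schemes predict different fixation targets — the kind of diverging prediction that makes the comparison empirically testable.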
18

Region of Interest Aware and Impairment Based Image Quality Assessment

Chandu, Chiranjeevi January 2016 (has links)
No description available.
19

Conversational topic moderates visual attention to faces in autism spectrum disorder

Brien, Ashley Rae 01 January 2015 (has links)
Autism Spectrum Disorder (ASD) is often accompanied by atypical visual attention to faces. Previous studies have identified some predictors of atypical visual attention in ASD, but very few have explored the role of conversational context. In this study, the fixation patterns of 19 typically developing (TD) children and 18 children with ASD were assessed during a Skype conversation in which participants were asked to converse about mundane vs. emotion-laden topics. We hypothesized that 1) children with ASD would visually attend less to the eye region and more to the mouth region of the face compared to TD children, and that 2) this effect would be exaggerated in the emotion-laden conversation. With regard to hypothesis 1, we found no difference between groups for either number of fixations or fixation time; however, children with ASD did evidence significantly more off-screen looking time compared to their TD peers. An additional analysis showed that, compared to the TD group, the ASD group also had greater average fixation durations when looking at their speaking partner's face (both eyes and mouth) across conversational contexts. In support of hypothesis 2, eye-tracking data (corrected for amount of time during conversation) revealed two interaction effects. Compared to the TD group, the ASD group showed 1) a decreased number of fixations to eyes and 2) an increased fixation time to mouths, but only in the emotion-laden conversation. We also examined variables that predicted decreased number of eye fixations and increased mouth-looking in ASD in the emotion-laden conversation. Change scores (to be understood as the degree of visual attention shifting from the mundane to the emotion-laden condition) for the ASD group negatively correlated with age, perceptual reasoning skills, verbal ability, general IQ, theory of mind (ToM) competence, and executive function (EF) subscales, and positively correlated with autism severity. 
Cognitive mechanisms at play and implications for theory and clinical practice are considered.
20

Paying visual attention to pre-orthographic processing in reading and developmental dyslexia

Lobier, Muriel 14 December 2011 (has links)
This doctoral thesis aims to investigate the role of visual attention in the visual front end of reading. It is grounded in the theoretical framework of the multi-trace memory (MTM) model of reading and of the visual attention (VA) span deficit hypothesis of developmental dyslexia. Visual attention capacity in the MTM model is operationalized by the VA span, defined as the maximum number of individual visual elements that can be processed in parallel. The VA span contributes significantly to reading performance in normally reading children and is selectively impaired in a subset of the dyslexic population. Three studies investigated the role of visual attention in the VA span and in visual word recognition in normal reading. A first study tested whether letter processing in the VA span whole-report task was parallel or serial. A second study specified the role of visual attention in the VA span and reading speed. Finally, a third study used fMRI to investigate whether pre-orthographic processing involves the neural networks of visual attention. These three studies argue for visual attention as an important component of pre-orthographic processing. 
A second series of studies aimed to show that an impairment of visual attention best accounts for the VA span deficit. In a fourth study, the predictions of phonological and visual accounts of the VA span deficit were tested using a multiple-character categorization task. Finally, a fifth study explored the neural correlates of multiple-character processing in VA span impaired adults. These last two studies argue for reduced visual attention capacity, and not poor verbal recoding abilities, as the underlying cause of the VA span deficit.
