61

Does Crowding Obscure the Presence of Attentional Guidance in Contextual Cueing?

Fiske, Steven William 01 January 2012 (has links)
The contextual cueing effect was initially thought to be the product of memory guiding attention to the target location. However, the steep search slopes obtained in contextual cueing indicate an absence of attentional guidance. We hypothesized that crowding could be obscuring the presence of attentional guidance and investigated this possibility in two experiments. Crowding was manipulated by varying the density of items in the local target region in a contextual cueing task. We observed a significant reduction in search slopes between the novel and repeated conditions when crowding was reduced. Enhancing crowding eliminated the contextual cueing effect. These findings suggest that increased crowding at larger set sizes attenuates the memory-based attentional guidance in contextual cueing, thereby producing steep search slopes.
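For context, the search slope is the increase in response time per additional display item, and a shallower slope for repeated than for novel displays is the usual signature of attentional guidance. A minimal sketch of how such slopes are estimated follows; the response times are invented for illustration and are not data from this thesis.

```python
import numpy as np

# Hypothetical mean response times (ms) at each set size; values are
# invented for illustration only, not data from the thesis.
set_sizes = np.array([8, 12, 16])
rt_novel = np.array([900.0, 1060.0, 1220.0])     # novel displays
rt_repeated = np.array([880.0, 1000.0, 1120.0])  # repeated displays

# Search slope = extra response time (ms) per additional item,
# estimated by a least-squares line over set size.
slope_novel = np.polyfit(set_sizes, rt_novel, 1)[0]
slope_repeated = np.polyfit(set_sizes, rt_repeated, 1)[0]

print(f"novel slope:    {slope_novel:.1f} ms/item")
print(f"repeated slope: {slope_repeated:.1f} ms/item")
# A reliably shallower repeated-display slope is the signature of
# memory-based attentional guidance discussed in the abstract.
```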
62

Improving the performance of airport luggage inspection by providing cognitive and perceptual supports to screeners

Liu, Xi January 2008 (has links)
Concern about aviation security has recently focused on the work of airport security screeners who detect threat items in passengers' luggage. Because human screening performance is unreliable, effective methods of training and screening are required to improve screeners' detection ability. The overall aim of this thesis is to understand and define the visual and cognitive factors involved in inspecting X-ray images of passengers' luggage, to examine the usability of perceptual feedback in this demanding task, and to develop a salient-regions method that assists screeners in detecting targets. This work contributes knowledge about the skills involved in examining X-ray luggage images, provides insight into the design of training systems, and develops a method to enhance screeners' detection ability. A questionnaire was developed to extract screeners' expertise and to investigate the effect of image features on visual attention, and a series of experiments was designed to characterize the screening task and to explore how knowledge and skills develop with practice. The results indicate that training under time-stressed conditions is recommended for ensuring adequately high detection ability in real-life situations, because screeners must balance accuracy and speed under time pressure. Screeners' advantages are better detection ability and search skills gained through experience with the search task. The hit rate of naive observers improved with perceptual exposure to images of threat items, although their scanning did not become more efficient. Detection performance and search skills improve with practice on frequently presented targets, and this ability partly transfers to novel targets. Learning in visual search for threat items is stimulus-specific: familiarity with the stimuli and the task is the source of the performance enhancement. Threat items should therefore be updated constantly, and a large set of X-ray threat objects should be used in screener training to enlarge object knowledge and enhance recognition ability. Perceptual feedback that circles areas fixated for longer than 1000 ms does not significantly improve observers' detection ability in the airport screening task. Features of bags and threat items influence initial attention and the allocation of attention during search. Salient regions, based on pure stimulus properties, not only contain most targets in X-ray images but also improve observers' hit rate by leading them to scrutinize these areas carefully.
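As a generic illustration of what "salient regions based on pure stimulus properties" can mean, the sketch below computes a simple center-surround contrast map and thresholds it. This is not the specific method developed in the thesis; the window size and threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def salient_regions(image: np.ndarray, window: int = 15, thresh: float = 2.0) -> np.ndarray:
    """Return a boolean mask of locally high-contrast regions.

    Toy center-surround contrast: pixels whose deviation from the local
    mean exceeds `thresh` standard deviations of the contrast map are
    marked salient.
    """
    image = image.astype(float)
    local_mean = uniform_filter(image, size=window)
    contrast = np.abs(image - local_mean)
    return contrast > thresh * contrast.std()

# Example with a synthetic "X-ray" image: a dense (dark) object on a lighter background.
img = np.full((100, 100), 200.0)
img[40:60, 40:60] = 50.0
mask = salient_regions(img)
print("fraction of image flagged salient:", mask.mean())
```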
63

Akių judesiai vizualinėje paieškoje / Eye movements during visual search

Jagminienė, Lina 09 June 2005 (has links)
The topic of this Master's project in electronic engineering is relevant because human vision depends on the ability to perform saccadic eye movements and to orient the eye so that a person can see the targets of interest in a scene. Studying the main saccadic movements requires characterizing their features. The purpose of the work is to determine what eye movements can reveal about how a view is inspected and perceived. The experiments were performed using a contact-free eye movement research method in which the eye of the investigated person was filmed. The work reviews methods of eye movement research and earlier studies of eye movements during the viewing of scenes. Two new experiments were also performed, leading to the following conclusions: 1. People can control saccadic movements; instead of a single movement to a memorized target (when a preliminary view is available), several characteristic saccades are performed, and in most cases with a preliminary view the error of the final landing point is smaller. 2. The hypothesis was confirmed that, when viewing objects, gaze shifts to the places that can provide more information. 3. The hypothesis was confirmed that continually alternating similar views induces fatigue, which dulls perception. The results of the experiments can be applied to adapting computers for disabled persons, to advertising, to warning signs, to automobile design, and so on.
64

Variation in Visual Search Abilities and Performance

Clark, Kait January 2014 (has links)
Visual search, the process of detecting relevant items within an environment, is a vital skill required for navigating one's visual environment as well as for careers, such as radiology and airport security, that rely upon accurate searching. Research over the course of several decades has established that visual search requires the integration of low- and high-level cognitive processes, including sensory analysis, attentional allocation, target discrimination, and decision-making. Search abilities are malleable and vary in accordance with long-term experiences, direct practice, and contextual factors in the immediate environment; however, the mechanisms responsible for changes in search performance remain largely unclear. A series of studies examine variation in visual search abilities and performance and aim to identify the underlying mechanisms.

To assess differences associated with long-term experiences, visual search performance is compared between laypersons (typically undergraduates) and specific populations, including radiologists and avid action video game players. Behavioral markers of search processes are used to elucidate causes of enhanced search performance. To assess differences associated with direct practice, laypersons perform a visual search task over five consecutive days, and electrophysiological activity is recorded from the scalp on the first and last days of the protocol. Electrophysiological markers associated with specific stages of processing are analyzed to determine neurocognitive changes contributing to improved performance. To assess differences associated with contextual factors, laypersons are randomly assigned to experimental conditions in which they complete a visual search task within a particular framework or in the presence or absence of motivation, feedback, and/or time pressure.

Results demonstrate that search abilities can improve through experience and direct training, but the mechanisms underlying effects in each case are different. Long-term experiences are associated with strategic attentional allocation, but direct training can improve low-level sensory analysis in addition to higher-level processes. Results also demonstrate nuanced effects of experience and context. On searches that contain multiple targets, task framework impacts accuracy for detecting additional targets after one target has been identified. The combination of motivation and feedback enhances accuracy for both single- and multiple-target searches. Implications for cognitive theory and applications to occupational protocols are discussed.
65

THE EFFECT OF PRACTICE ON EYE MOVEMENTS IN THE 1/D PARADIGM

Seidelman, Will 01 January 2011 (has links)
Previous studies have demonstrated that observers may ignore highly salient feature singletons during a conjunction search task by focusing the attentional window (Belopolsky, Zwaan, Theeuwes, & Kramer, 2007) or by suppressing bottom-up information (Treisman & Sato, 1990). In the current study, observers' eye movements were monitored while they performed a search task in which a feature singleton was present and corresponded with the target at a chance level. With practice, observers were less likely to make an initial saccade toward the singleton item, but initial saccades directed at the target remained likely throughout. The results demonstrate that, in an effort to ignore the singleton, observers were more likely to suppress bottom-up information than to adjust the size of the attentional window.
66

Investigating the roles of features and priming in visual search

Hailston, Kenneth 01 June 2009 (has links)
Identifying and locating specific objects amidst irrelevant, distracting items can be difficult when one is unsure of where, or even what, to look for. Priming the perceptual/cognitive system for specific features or objects is one way of helping observers to locate and identify target items (e.g., Grice & Gwynne, 1985; Laarni & Hakkinen, 1994). Past research has demonstrated that priming single features does indeed affect search performance (e.g., Hailston & Davis, 2006; Huang & Pashler, 2005). But what happens when more than one feature is primed? Does priming two features result in better performance than priming only one? What about three features? How does feature priming compare to simply priming the entire object itself? The current research addressed these questions with a series of three visual search experiments. In the first experiment, performance in simple feature search was compared against triple-conjunction search performance. Three prominent models of visual search were compared to see which best predicted actual performance. In the second and third experiments, the effects of multiple-feature priming on search accuracy were examined in a triple-conjunction search (Experiment 2) and a whole-object search (Experiment 3). Moreover, in Experiment 3 the effectiveness of whole-object primes was compared to that of multiple-feature primes. Results show that none of the three models can accurately predict performance in all cases, suggesting some modification of each is necessary. Furthermore, valid primes resulted in performance benefits, and these benefits increased with the number of primed features. Finally, no performance costs of invalid priming were observed in the current experiments.
67

Functional understanding of space: Representing spatial knowledge using concepts grounded in an agent's purpose

Sjöö, Kristoffer January 2011 (has links)
This thesis examines the role of function in representations of space by robots - that is, dealing directly and explicitly with those aspects of space and objects in space that serve some purpose for the robot. It is suggested that taking function into account helps increase the generality and robustness of solutions in an unpredictable and complex world, and the suggestion is affirmed by several instantiations of functionally conceived spatial models. These include perceptual models for the "on" and "in" relations based on support and containment; context-sensitive segmentation of 2-D maps into regions distinguished by functional criteria; and learned predictive models of the causal relationships between objects in physics simulation. Practical application of these models is also demonstrated in the context of object search on a mobile robotic platform.
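The "on" and "in" relations mentioned above can be made concrete with a rough geometric sketch. The axis-aligned-box tests below are only a simplified stand-in for the perceptual models of support and containment developed in the thesis; the class and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box: (x, y, z) of the min corner plus sizes."""
    x: float; y: float; z: float
    w: float; d: float; h: float

    @property
    def top(self) -> float:
        return self.z + self.h

def is_on(obj: Box, support: Box, tol: float = 0.01) -> bool:
    """Crude 'on' (support) test: the object rests at the support's top
    surface and their footprints overlap horizontally."""
    resting = abs(obj.z - support.top) <= tol
    overlap_x = obj.x < support.x + support.w and support.x < obj.x + obj.w
    overlap_y = obj.y < support.y + support.d and support.y < obj.y + obj.d
    return resting and overlap_x and overlap_y

def is_in(obj: Box, container: Box) -> bool:
    """Crude 'in' (containment) test: the object's box lies wholly inside the container's box."""
    return (container.x <= obj.x and obj.x + obj.w <= container.x + container.w and
            container.y <= obj.y and obj.y + obj.d <= container.y + container.d and
            container.z <= obj.z and obj.z + obj.h <= container.z + container.h)

table = Box(0, 0, 0, 1.0, 1.0, 0.75)
cup = Box(0.4, 0.4, 0.75, 0.1, 0.1, 0.12)
print(is_on(cup, table))   # True: the cup rests on the table top
```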
68

An "active vision" computational model of visual search for human-computer interaction

Halverson, Timothy E., 1971- 12 1900 (has links)
Visual search is an important part of human-computer interaction (HCI). The visual search processes that people use have a substantial effect on the time expended and likelihood of finding the information they seek. This dissertation investigates visual search through experiments and computational cognitive modeling. Computational cognitive modeling is a powerful methodology that uses computer simulation to capture, assert, record, and replay plausible sets of interactions among the many human processes at work during visual search. This dissertation aims to provide a cognitive model of visual search that can be utilized by predictive interface analysis tools, and to do so in a manner consistent with a comprehensive theory of human visual processing, namely active vision. The model accounts for the four questions of active vision, the answers to which are important to both practitioners and researchers in HCI: What can be perceived in a fixation? When do the eyes move? Where do the eyes move? What information is integrated between eye movements? This dissertation presents a principled progression of the development of a computational model of active vision. Three experiments were conducted that investigate the effects of visual layout properties: density, color, and word meaning. The experimental results provide a better understanding of how these factors affect human-computer visual interaction. Three sets of data, two from the experiments reported here, were accurately modeled in the EPIC (Executive Process-Interactive Control) cognitive architecture. This work extends the practice of computational cognitive modeling by (a) informing the process of developing computational models through the use of eye movement data and (b) providing the first detailed instantiation of the theory of active vision in a computational framework. This instantiation allows us to better understand (a) the effects and interactions of visual search processes and (b) how these visual search processes can be used computationally to predict people's visual search behavior. This research ultimately benefits HCI by giving researchers and practitioners a better understanding of how users visually interact with computers and provides a foundation for tools to predict that interaction. This dissertation includes both previously published and co-authored material. Adviser: Anthony J. Hornof
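The four active-vision questions lend themselves to a simulation-loop reading. The toy sketch below is not the EPIC model or the dissertation's implementation; it only illustrates, under arbitrary assumptions about foveal radius and eye-movement strategy, how the questions can structure a search loop.

```python
import math
import random

def simulate_search(items, target, fovea_radius=2.0, max_fixations=20):
    """Toy active-vision loop: perceive items near the current fixation,
    stop if the target is perceived, otherwise move the eyes to the
    nearest item not yet examined."""
    fixation = (0.0, 0.0)          # start at the display centre
    examined = set()
    for n in range(1, max_fixations + 1):
        # "What can be perceived in a fixation?" -- items within the fovea.
        visible = [i for i, pos in enumerate(items)
                   if math.dist(pos, fixation) <= fovea_radius]
        examined.update(visible)
        if target in visible:
            return n                              # target found on fixation n
        # "Where do the eyes move?" -- nearest item not yet examined.
        remaining = [i for i in range(len(items)) if i not in examined]
        if not remaining:
            return None
        nxt = min(remaining, key=lambda i: math.dist(items[i], fixation))
        fixation = items[nxt]                     # "When do the eyes move?" -- now.
    return None

random.seed(1)
layout = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
print("fixations to find target:", simulate_search(layout, target=7))
```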
69

Target "templates": How the precision of mental representations affects attentional guidance and decision-making in visual search

January 2013 (has links)
When people look for things in their environment they use a target template - a mental representation of the object they are attempting to locate - to guide their attention around a scene and to assess incoming visual input to determine if they have found that for which they are searching. However, unlike laboratory experiments, searchers in the real world rarely have perfect knowledge regarding the appearance of their target. In five experiments (with nearly 1,000 participants), we examined how the precision of the observer's template affects their ability to conduct visual search. Specifically, we simulated template imprecision in two ways: first, by contaminating our searchers' templates with inaccurate features, and second, by introducing extraneous features to the template that were unhelpful. In those experiments we recorded the eye movements of our searchers in order to make inferences regarding the extent to which attentional guidance and decision-making are hindered by template imprecision. We also examined a third way in which templates may become imprecise; namely, that they may deteriorate over time. Overall, our findings support a dual-function theory of the target template, and highlight the importance of examining template precision in future research.
70

Direct and indirect measures of learning in visual search

Reuter, Robert 11 September 2013 (has links)
In this thesis, we explore direct and indirect measures of learning in a visual search task commonly called contextual cueing. In the first part, we present a review of the scientific literature on contextual cueing, in order to give the reader a better general idea of existing evidence and open questions within this relatively new research field. The aims of our own experimental studies, presented in the succeeding chapters, are the following: (1) to replicate and extend the findings described in the various papers by Marvin Chun and colleagues on contextual cueing of visual attention; (2) to explore the nature of the memory representations underlying the observed learning effects, especially whether learning is actually implicit and whether memory representations are distinctive, episodic and instance-based or rather distributed, continuous and graded; (3) to extend the study of contextual cueing to more realistic visual stimuli, in order to test its robustness across various situations and validate its adaptive value in ecologically sound conditions; and (4) to investigate whether such knowledge about the association between visual contexts and "meaningful" locations can be (automatically) transferred to other tasks, namely a change detection task.

In a first series of four experiments, we tried to replicate the documented contextual cueing effect using a wide range of direct measures of learning (tasks that are supposed to be related to explicit knowledge), and we systematically varied the distinctiveness of context configurations to study its effect on both direct and indirect measures of learning.

We also ran a series of neural network simulations (briefly described in the general discussion of this thesis), based on a very simple association-learning mechanism, that not only account for the observed contextual cueing effect but also yield rather specific predictions about future experimental data: contextual cueing effects should also be observed when repetitions of context configurations are not perfect, i.e. the networks were able to react to slightly distorted versions of repeating contexts in a similar way as they did to completely identical contexts. We conjectured that, if the simple connectionist model captures some relevant aspects of the contextual cueing effect, human participants should therefore become faster at detecting targets surrounded by context configurations that are only partially identical from trial to trial, compared with trials where the context configurations were randomly generated. These predictions were tested in a second series of experiments using pseudo-repeated context configurations, in which some distractor items were either displaced from trial to trial or changed in orientation, while the global layout was conserved.

In a third series of experiments, we used more realistic images of natural landscapes as background contexts to establish the robustness of the contextual cueing effect as well as its ecological relevance, as claimed by Chun and colleagues. We also added a second task to these experiments to study whether the acquired knowledge about the background-target location associations would (automatically) transfer to another visual search task, namely a change detection task. If participants have learned that certain locations of the repeated images are "important", since they contain the target item to look for, then changes occurring at those specific locations should lead to less "change blindness" than changes occurring at other, irrelevant locations. We used two different types of instructions to introduce this second task after the visual search task: we either stressed the link between the two tasks, telling participants that remembering the "important" locations for each image could be used to find the changes faster, or we simply told them to perform the second task without any reference to the first one.

We close this thesis with a general discussion, combining findings based on our review of the existing research literature with findings based on our own experimental explorations of the contextual cueing effect, and we discuss the implications of our empirical studies for the scientific investigation of contextual cueing and implicit learning, in terms of theoretical, empirical and methodological issues.
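The abstract describes the simulations only as using "a very simple association-learning mechanism", so the sketch below is just one plausible reading: a single-layer network trained with the delta rule to associate distractor-configuration vectors with target locations, which should also respond to slightly perturbed ("pseudo-repeated") contexts. All sizes and parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_locations = 48          # possible display locations (assumed grid size)
n_contexts = 12           # number of repeated context configurations

# Each context is a binary vector marking occupied distractor locations,
# paired with one fixed target location (as in contextual cueing).
contexts = (rng.random((n_contexts, n_locations)) > 0.7).astype(float)
targets = rng.integers(0, n_locations, size=n_contexts)

W = np.zeros((n_locations, n_locations))   # context -> target-location weights
lr = 0.05

for epoch in range(200):                   # repeated exposure across blocks
    for c, t in zip(contexts, targets):
        y = W @ c                           # predicted activation per location
        d = -y
        d[t] += 1.0                         # delta rule toward a one-hot target
        W += lr * np.outer(d, c)

# After learning, a trained context should cue its own target location most
# strongly, and a slightly perturbed ("pseudo-repeated") context should still do so.
probe = contexts[0].copy()
flip = rng.integers(0, n_locations)
probe[flip] = 1.0 - probe[flip]            # displace one distractor
print("cued location (intact):   ", int(np.argmax(W @ contexts[0])), "target:", int(targets[0]))
print("cued location (perturbed):", int(np.argmax(W @ probe)))
```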
