11.
The combined effects of sensory and non-sensory variables on saccade selection processes in visual search. Warfe, Michael, 06 August 2010.
Decisions are based on multiple sources of information. For example, sensory information encoding environmental features may be combined with prior experience to bias judgements in visual behaviour. With the goal of characterizing the rules by which sensory and non-sensory variables combine to direct saccade selection processes, monkeys were trained in a visual search task in which the discriminability of a visual target and the reward outcome for correct foveation varied systematically. Target discriminability was manipulated across three levels of luminance contrast, while reward was manipulated by 'tagging' a spatial location such that target foveation at the tagged location yielded one, two or four times the liquid reward available at all other locations. The location and discriminability of the search target amongst seven distractor stimuli varied randomly from trial to trial, while the magnitude of reward at the tagged location was fixed for each experimental block.
Reward was found to have a large effect on search behaviour when target discriminability was low, but as discriminability increased, the effect of reward diminished. More specifically, reward increased choice probability and reduced the latency of saccades to target and distractor stimuli appearing in the tagged location. Together, the results suggested the effects of reward and luminance on saccade selection were dependent on one another.
To characterize the nature of this interaction, the search psychophysics were couched in saccade selection processes using signal detection theory. Signals carrying target- and distractor-related information were modelled and taken to capture an actual discrimination process implemented by the brain. It was found that a response bias in saccade selection processes could largely reproduce monkey choice behaviour on both correct and incorrect trials. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2010.
12.
Episodically Defined Categories in the Organization of Visual Memory. Antonelli, Karla B, 13 December 2014.
Research into the nature and content of visual long-term memory has investigated what aspects of its representation may account for the remarkable ability we have to remember large amounts of detailed visual information. One proposed theory is that visual memories are supported by an underlying structure of conceptual knowledge around which visual information is organized. However, findings on memory for visual information learned in a visual search task were not explained by this theory of conceptual support, and a new theory is proposed that incorporates episodic, task-relevant visual information into the organizational structure of visual memory. The current study examined visual long-term memory organization as evidenced by retroactive interference effects in memory for objects learned in a visual search task. Four experiments examined the amount of retroactive interference induced as a function of how interfering objects were related to learned objects. Specifically, episodically task-relevant information about objects was manipulated between conditions through search instructions. Conceptual category, perceptual information (color), and context (an object's role in the search) were examined for their contribution to retroactive interference for learned objects. Findings indicated that when made episodically task-relevant, perceptual as well as conceptual information contributed to the organization of visual long-term memory. However, when made episodically non-relevant, perceptual information did not contribute to memory organization, and memory defaulted to conceptual category organization. This finding supports the theory of an episodically defined organizational structure in visual long-term memory that is overlaid upon an underlying conceptual structure.
13.
ON THE CONTRIBUTION OF TOP-DOWN PREPARATION TO LEARNED CONTROL OVER SELECTION IN SINGLETON SEARCH / TOP-DOWN PREPARATION IN SINGLETON SEARCH. Sclodnick, Benjamin, January 2024.
Physically salient stimuli in the visual field tend to capture attention rapidly and automatically, leading to the perceived pop-out effect in visual search. There is much debate about whether and how top-down preparatory processes influence visual attention when salient stimuli are present. Experience with a task involves learning at multiple levels of cognitive processing, and it can be difficult to distinguish these learning effects from the effect of a 'one-shot' act of top-down preparation on a given trial. That is, preparing to attend to a particular colour might influence search on a given trial, but that act of preparation may also become embedded in a memory representation that carries over to influence future search events. Moreover, such learning effects may accumulate with repeated experiences of preparing in a particular way. The goal of the present thesis was to examine specifically how preparation at one point in time affects pop-out search at a later point in time. To this end, I present the following empirical contributions: I introduce a novel method for studying preparation effects in search for a salient singleton target; I use this new method to explore the contribution of learning and memory to effects of preparation on singleton search, and outline a number of boundary conditions of this new method; and I distinguish between two components of the reported preparatory effects, one related to preparing to attend to a particular feature, and one related to preparing to ignore a particular feature. Together, these contributions highlight the contribution of top-down preparation to memory representations that guide attention in singleton search, and offer a novel method that researchers can use to ask unanswered questions about the roles of preparation and experience in singleton search. / Thesis / Doctor of Philosophy (PhD) / Imagine looking out over a farmer's field.
All you can see is green grass, except for a big red tractor parked off in the distance. In this scenario, the contrast of the tractor’s colour and shape against the uniform grass will tend to draw attention to the tractor, making it immediately noticeable. This pop-out effect is often thought to be driven solely by physical stimulus features. However, past experiences searching through visual scenes can also affect the degree to which salient objects pop-out, suggesting that pop-out is influenced by memory. This thesis is centered around the memory processes that influence visual search for pop-out targets. I focus specifically on how deliberate preparation for particular search targets at one moment in time can lead to learning that influences pop-out search at later moments.
14.
Auditory target identification in a visual search task. Lochner, Martin Jewell, January 2005.
Previous research has shown that simultaneous auditory identification of the target in a visual search task can lead to more efficient (i.e., 'flatter') search functions (Spivey et al., 2001). Experiment 1 replicates the paradigm of Spivey et al., providing subjects with auditory identification of the search target either before (Consecutive condition) or simultaneously with (Concurrent condition) the onset of the search task. RT x Set Size slopes in the Concurrent condition are approximately half as steep as those in the Consecutive condition. Experiment 2 employs a distractor ratio manipulation to test the notion that subjects are using the simultaneous auditory target identification to 'parse' the search set by colour, thus reducing the search set by half. The results of Experiment 2 do not support the notion that subjects are parsing the search set by colour. Experiment 3 addresses the same question as Experiment 2, but obtains the desired distractor ratios by holding the number of relevantly-coloured items constant while letting overall set size vary. Unlike Experiment 2, Experiment 3 supports the interpretation that subjects are using the auditory target identification to parse the search set.
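The RT x Set Size slopes compared in this abstract can be estimated with ordinary least squares; a minimal sketch, using invented reaction times that roughly match the reported 2:1 slope ratio (not data from the thesis):

```python
# Hedged sketch: estimating RT x Set Size search slopes (ms/item) by
# least squares. The RT values below are invented placeholders chosen
# to illustrate a Concurrent slope about half the Consecutive slope.

def search_slope(set_sizes, rts):
    """Ordinary least-squares slope of mean RT against set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

set_sizes = [4, 8, 12, 16]
consecutive_rts = [620, 700, 780, 860]  # hypothetical: ~20 ms/item
concurrent_rts = [610, 650, 690, 730]   # hypothetical: ~10 ms/item

print(search_slope(set_sizes, consecutive_rts))  # 20.0
print(search_slope(set_sizes, concurrent_rts))   # 10.0
```

A flatter slope (fewer extra milliseconds per added display item) is the standard operationalization of more efficient search in this paradigm.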
15.
Collaboration During Visual Search. Malcolmson, Kelly, January 2006.
Three experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. The search performance of individuals working together (collaborative pairs) was compared to the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed when the search difficulty was increased and when the presence of another individual was matched across pairs. The results are discussed in the context of task sharing, social loafing and current theories of visual search.
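The signal detection analysis described here separates sensitivity from response bias; a minimal sketch with hypothetical hit and false-alarm rates (not the study's data), where d' indexes sensitivity and a positive criterion c indicates a conservative bias:

```python
# Hedged sketch of a signal detection analysis like the one described
# above. Hit and false-alarm rates are hypothetical, not the study's data.
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # c > 0 means conservative
    return d_prime, criterion

# Hypothetical pattern matching the reported result: collaborative pairs
# hit less often but also false-alarm less often than nominal pairs.
collab = dprime_criterion(hit_rate=0.80, fa_rate=0.05)
nominal = dprime_criterion(hit_rate=0.90, fa_rate=0.15)
print(collab)   # higher d', positive (conservative) criterion
print(nominal)
```

With these illustrative rates, the collaborative pair comes out both more sensitive and more conservative than the nominal pair, which is the qualitative pattern the abstract reports.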
16.
Grabbing Your Attention: The Impact of Finding a First Target in Multiple-Target Search. Adamo, Stephen Hunter, January 2016.
For over 50 years, the Satisfaction of Search effect, more recently known as the Subsequent Search Miss (SSM) effect, has plagued the field of radiology. Defined as a decrease in additional-target accuracy after detecting a prior target in a visual search, SSM errors are known to underlie both real-world search errors (e.g., a radiologist is more likely to miss a tumor if a different tumor was previously detected) and more simplified, lab-based search errors (e.g., an observer is more likely to miss a target 'T' if a different target 'T' was previously detected). Unfortunately, little was known about this phenomenon's cognitive underpinnings, and SSM errors have proven difficult to eliminate. More recently, however, experimental research has provided evidence for three different theories of SSM errors: the Satisfaction account, the Perceptual Set account, and the Resource Depletion account. A series of studies examined performance in a multiple-target visual search and aimed to provide support for the Resource Depletion account: a first target consumes cognitive resources, leaving less available to process additional targets.

To assess a potential mechanism underlying SSM errors, eye movements were recorded in a multiple-target visual search and used to explore whether a first target may produce an immediate decrease in second-target accuracy, known as an attentional blink. To determine whether other known attentional distractions amplified the effect that finding a first target has on second-target detection, distractors within the immediate vicinity of the targets (i.e., clutter) were measured and compared to accuracy for a second target. To better understand which characteristics of attention were impacted by detecting a first target, individual differences within four characteristics of attention were compared to second-target misses in a multiple-target visual search.

The results demonstrated that an attentional blink underlies SSM errors, with a decrease in second-target accuracy from 135 ms to 405 ms after detecting or re-fixating a first target. The effects of clutter were exacerbated after finding a first target, causing a greater decrease in second-target accuracy as clutter increased around a second target. The attentional characteristics of modulation and vigilance were correlated with second-target misses, suggesting that worse attentional modulation and vigilance are predictive of more second-target misses. Taken together, these results are used as the foundation for a new theory of SSM errors, the Flux Capacitor theory. The Flux Capacitor theory predicts that once a target is found, it is maintained as an attentional template in working memory, which consumes attentional resources that could otherwise be used to detect additional targets. This theory not only proposes why attentional resources are consumed by a first target, but encompasses the research in support of all three SSM theories in an effort to establish a grand, unified theory of SSM errors. / Dissertation
17.
Spatial and Temporal Learning in Robotic Pick-and-Place Domains via Demonstrations and Observations. Toris, Russell C, 20 April 2016.
Traditional methods for Learning from Demonstration require users to train the robot through the entire process, or to provide feedback throughout a given task. These previous methods have proved successful in a selection of robotic domains; however, many are limited by the ability of the user to effectively demonstrate the task. In many cases, noisy demonstrations or a failure to understand the underlying model prevent these methods from working with a wider range of non-expert users. My insight is that in many mobile pick-and-place domains, teaching is done at too fine-grained a level. In many such tasks, users are solely concerned with the end goal. This implies that the complexity and time associated with training and teaching robots through the entirety of the task is unnecessary. The robotic agent needs to know (1) a probable search location to retrieve the task's objects and (2) how to arrange the items to complete the task. This thesis work develops new techniques for obtaining such data from high-level spatial and temporal observations and demonstrations which can later be applied in new, unseen environments. This thesis makes the following contributions: (1) This work is built on a crowd robotics platform and, as such, we contribute the development of efficient data streaming techniques to further these capabilities. By doing so, users can more easily interact with robots on a number of platforms. (2) The presentation of new algorithms that can learn pick-and-place tasks from a large corpus of goal templates. My work contributes algorithms that produce a metric which ranks the appropriate frame of reference for each item based solely on spatial demonstrations. (3) An algorithm which can enhance the above templates with ordering constraints using coarse and noisy temporal information. Such a method eliminates the need for a user to explicitly specify such constraints and searches for an optimal ordering and placement of items.
(4) A novel algorithm which is able to learn probable search locations of objects based solely on sparsely made temporal observations. For this, we introduce persistence models of objects customized to a user's environment.
18.
Effect of video based road commentary training on the hazard perception skills of teenage novice drivers. Williamson, Amy Rose, January 2008.
Recent evidence in the road safety research literature indicates that hazard perception, visual search and attention may depend on executive functions that are still developing in young novice drivers before the age of 25, contributing to their unintentional risk-taking behaviour and subsequent high crash rates. The present research aimed to investigate these skills, whether they are predictive of each other, and whether hazard perception can be improved through road commentary training. Twenty-two young novice drivers and eight experienced drivers were recruited as participants. The experienced drivers performed significantly better than the novice drivers on a hazard detection task designed specifically for the study. The two groups' visual search skills were also compared using the Visual Search and Attention Test, with the experienced drivers again performing significantly better. Interestingly, a significant positive correlation was found between participants' scores on the hazard detection task and the Visual Search and Attention Test, suggesting that hazard detection skill may be predictable from visual search performance. The novice drivers who received 12 trials of video-based road commentary training improved significantly in their hazard detection skills, suggesting that video-based road commentary could be an effective road safety intervention for young novice drivers and, if developed into a more comprehensive programme, holds promise for future implementation in the New Zealand Graduated Driver Licensing System. The results also hold promise for future investigation of the Visual Search and Attention Test as a predictor of hazard perception skills in novice drivers.
19.
Using Visual Change Detection to Examine the Functional Architecture of Visual Short-Term Memory. Alexander Burmester, Unknown Date.
A common problem in vision research is explaining how humans perceive a coherent, detailed and stable world despite the fact that the eyes make constant, jumpy movements and the fact that only a small part of the visual field can be resolved in detail at any one time. This is essentially a problem of integration over time: how successive views of the visual world can be used to create the impression of a continuous and stable environment. A common way of studying this problem is to use complete visual scenes as stimuli and present a changed scene after a disruption such as an eye movement or a blank screen. It is found in these studies that observers have great difficulty detecting changes made during a disruption, even though these changes are immediately and easily detectable when the disruption is removed. These results have highlighted the importance of motion cues in tracking changes to the environment, but also reveal the limited nature of the internal representation. Change blindness studies are interesting as demonstrations but can be difficult to interpret as they are usually applied to complex, naturalistic scenes. More traditional studies of scene analysis, such as visual search, are more abstract in their formulation, but offer more controlled stimulus conditions. In a typical visual search task, observers are presented with an array of objects against a uniform background and are required to report on the presence or absence of a target object that is differentiable from the other objects in some way. More recently, scene analysis has been investigated by combining change blindness and visual search in the 'visual search for change' paradigm, in which observers must search for a target object defined by a change over two presentations of the set of objects.
The experiments of this thesis investigate change blindness using the visual search for change paradigm, but also use principles of design from psychophysical experiments, dealing with detection and discrimination of basic visual qualities such as colour, speed, size, orientation and spatial frequency. This allows the experiments to precisely examine the role of these different features in the change blindness process. More specifically, the experiments are designed to look at the capacity of visual short-term memory for different visual features, by examining the retention of this information across the temporal gaps in the change blindness experiments. The nature and fidelity of representations in visual short-term memory is also investigated by manipulating (i) the manner in which featural information is distributed across space and objects, (ii) the time for which the information is available, and (iii) the manner in which observers must respond to that information. Results point to a model in which humans analyse objects in a scene at the level of features/attributes rather than at a pictorial/object level. Results also point to the fact that the working representations which humans retain during visual exploration are similarly feature- rather than object-based. In conclusion, the thesis proposes a model of scene analysis in which attention and vSTM capacity limits are used to explain the results from a more information-theoretic standpoint.
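A standard quantity in this literature links change-detection accuracy to vSTM capacity: Cowan's K. A minimal sketch with illustrative numbers (not data from the thesis):

```python
# Hedged sketch: estimating vSTM capacity from change-detection
# performance with Cowan's K = N * (hit rate - false-alarm rate),
# a standard formula for whole-display change detection.
# The rates below are illustrative, not data from this thesis.

def cowan_k(set_size, hit_rate, fa_rate):
    """Estimated number of items held in visual short-term memory."""
    return set_size * (hit_rate - fa_rate)

# e.g. 8-item displays, 65% change detections, 20% false alarms
print(cowan_k(8, 0.65, 0.20))  # about 3.6 items
```

Estimates of this kind typically plateau around 3-4 items as set size grows, which is one way feature-level capacity limits like those discussed above are quantified.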
20.
Vision, Instruction, and Action. Chapman, David, 01 April 1990.
This thesis describes Sonja, a system which uses instructions in the course of visually-guided activity. The thesis explores an integration of research in vision, activity, and natural language pragmatics. Sonja's visual system demonstrates the use of several intermediate visual processes, particularly visual search and routines, previously proposed on psychophysical grounds. The computations Sonja performs are compatible with the constraints imposed by neuroscientifically plausible hardware. Although Sonja can operate autonomously, it can also make flexible use of instructions provided by a human advisor. The system grounds its understanding of these instructions in perception and action.