111 |
Characterizing and modeling visual persistence, search strategies and fixation times
Amor, Tatiana María Alonso, January 2017
AMOR, T. M. A. Characterizing and modeling visual persistence, search strategies and fixation times. 2017. 114 f. Tese (Doutorado em Física) – Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2017.
To gather information from the world around us, we move our eyes constantly. On different
occasions we find ourselves performing visual searches, such as trying to find someone in a
crowd or a book on a shelf. While searching, our eyes “jump” from one location to another
giving rise to a wide repertoire of patterns, exhibiting distinctive persistent behaviors.
Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the
probability distributions of these measures show a clear preference of participants towards a
reading-like mechanism (geometrical persistence), whose features and potential advantages
for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation
Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find
that it exhibits a typical multifractal behavior arising from the sequential combination
of saccades and fixations. By inspecting the time series composed of only fixational
movements, our results reveal instead a monofractal behavior with a Hurst exponent
H ∼ 0.7, which indicates the presence of long-range power-law positive correlations
(statistical persistence). Motivated by the experimental findings from the study of the
distribution of the intersaccadic angles, we developed a simple visual search model that
quantifies the wide variety of possible search strategies. From our experiments we know
that when searching for a target within an image our brain can adopt different strategies. The
question then is: which one does it choose? We present a simple two-parameter visual search
model (VSM) based on a persistent random walk and the experimental inter-saccadic
angle distribution. The model captures the basic observed visual search strategies that
range from systematic or reading-like to completely random. We compare the results
of the model to the experimental data by measuring the space-filling efficiency of the
searches. Within the parameter space of the model, we are able to quantify the strategies
used by different individuals for three searching tasks and show how the average search
strategy changes across these three groups. Even though participants tend to explore a vast
range of parameters, when all the items are placed on a regular lattice, participants are
more likely to perform a systematic search, whereas in a more complex field, the search
trajectories resemble a random walk. In this way we can discern with high sensitivity
the relation between the visual landscape and the average strategy, disclosing how small
variations in the image induce strategy changes. Finally, we move beyond visual search
and study the fixation time distributions across different visual tasks. Fixation times are
commonly associated with cognitive processes, as it is in these instances that most of the
visual information is gathered. However, the distribution of fixation durations exhibits
certain similarities across a wide range of visual tasks and foveated species. We studied
how similar these distributions are, and found that, even though they share some common
properties, such as similar mean values, most of them are statistically different. Because
fixation durations can be controlled by two different mechanisms, cognitive or ocular, we
focus our research on finding a model for the fixation-time distribution flexible enough
to capture the observed behaviors in experiments that tested these concepts. At the same
time, the candidate function to model the distribution needs to be the response of some
very robust inner mechanism found in all the aforementioned scenarios. Hence, we discuss
the idea of a model based on microsaccadic inter-event time statistics, resulting in the
sum of Gamma distributions, each of these related to the presence of a distinctive number
of microsaccades in a fixation.
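The two-parameter visual search model can be illustrated with a toy persistent random walk. The sketch below is not the thesis's actual model: the `persistence` parameter and the Gaussian/uniform turning-angle mixture are simplifying assumptions standing in for the experimental inter-saccadic angle distribution.

```python
import math
import random

def persistent_search_walk(n_steps, persistence, step_len=1.0, seed=0):
    """Simulate a 2D persistent random walk as a toy visual-search scanpath.

    persistence in [0, 1]: 0 -> each step picks a fresh uniform direction
    (random strategy); values near 1 -> the heading is mostly kept
    (systematic, reading-like strategy). The turning-angle kernel is a
    simple mixture chosen purely for illustration.
    """
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        if rng.random() < persistence:
            turn = rng.gauss(0.0, 0.1)             # small deviation: keep heading
        else:
            turn = rng.uniform(-math.pi, math.pi)  # reorient at random
        heading += turn
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

def net_displacement(path):
    (x0, y0), (x1, y1) = path[0], path[-1]
    return math.hypot(x1 - x0, y1 - y0)

# A highly persistent walk covers space near-ballistically; a
# non-persistent one diffuses and revisits explored regions.
ballistic = persistent_search_walk(200, persistence=0.95)
diffusive = persistent_search_walk(200, persistence=0.0)
```

Sweeping `persistence` between 0 and 1 reproduces, in caricature, the range from random-walk-like to systematic, reading-like strategies that the model quantifies.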
|
112 |
Computational Models of Perceptual Space: From Simple Features to Complex Shapes
Pramod, R T, January 2014
Dissimilarity plays a very important role in object recognition. But finding the perceptual dissimilarity between objects is non-trivial, as it is not equivalent to the pixel dissimilarity between them (for example, two white-noise images appear very similar even when they have different intensity values at every corresponding pixel). However, visual search allows us to reliably measure the perceptual dissimilarity between a pair of objects: when the target object is dissimilar to the distracter, visual search is easy, and it is difficult otherwise. Even though we can measure perceptual dissimilarity between objects, we still do not understand either the underlying mechanisms or the visual features involved in the computation of dissimilarities. For this thesis, I have explored perceptual dissimilarity in two studies – by looking at known simple features and understanding how they combine, and by using computational models to understand or discover complex features.
In the first study, we looked at how the dissimilarity between two simple objects with known features can be predicted using the dissimilarities between individual features. Specifically, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We found that multiple-feature dissimilarities could be predicted as a linear combination of individual-feature dissimilarities. We also demonstrated for the first time that the aspect ratio of an object emerges as a novel feature in visual search. This work has been published in the Journal of Vision (Pramod & Arun, 2014).
Having established in the first study that simple features combine linearly, we devised a second study to investigate dissimilarities between complex shapes. Since shape is known to be one of the most salient and complex features in object representation, we chose silhouettes of animals and abstract objects to explore the nature of dissimilarity computations. We conducted visual search on humans using pairs of these silhouettes to obtain an estimate of perceptual dissimilarity. We then used various computational models of shape representation (such as Fourier descriptors, curvature scale space, and the HMAX model) to see how well they can predict the observed dissimilarities. We found that many of these computational models were able to predict the perceptual dissimilarities of a large number of object pairs. However, we also observed many cases where the computational models failed to predict perceptual dissimilarities. The manuscript related to this study is under preparation.
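The linear-combination result can be illustrated with a small least-squares sketch; every number below is invented for demonstration (the study's actual dissimilarities come from visual-search reaction times, not from a known generative model).

```python
import numpy as np

# Hypothetical single-feature dissimilarities (e.g. derived from search
# reaction times) for five object pairs differing in intensity, length
# and orientation respectively.
d_intensity   = np.array([0.8, 0.1, 0.0, 0.9, 0.2])
d_length      = np.array([0.0, 0.7, 0.1, 0.6, 0.8])
d_orientation = np.array([0.3, 0.0, 0.9, 0.1, 0.5])

# Observed dissimilarity for pairs differing in all three features at
# once, generated here from known weights just to demonstrate the fit.
true_w = np.array([0.5, 0.3, 0.2])
X = np.column_stack([d_intensity, d_length, d_orientation])
d_multi = X @ true_w

# Recover the weights: multi-feature dissimilarity modeled as a linear
# combination of the single-feature dissimilarities.
w, residuals, rank, _ = np.linalg.lstsq(X, d_multi, rcond=None)
print(np.round(w, 3))  # → [0.5 0.3 0.2]
```

With real data the fit would not be exact; the quality of the linear model is then judged by the residuals and by the correlation between predicted and observed dissimilarities.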
|
113 |
Curiosity and motivation toward visual information
Lundgren, Erik, January 2018
Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information-gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade off clicking regions with a high search-target probability against regions with high expected image-content information. Image-content IG was established from “information maps” based on participants’ exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG in this thesis is not identical to the information-theoretic concept of information gain, although the two quantities are probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image-based IG. It was also hypothesised that image-based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image-based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. The results support the idea that IG is rewarding, as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.
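For comparison with the empirical IG measure above, the information-theoretic quantity it is probably related to can be computed as an entropy reduction over a belief distribution; the belief values below are invented purely for illustration.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical belief over four candidate image themes before clicking:
# maximal uncertainty (2 bits).
prior = [0.25, 0.25, 0.25, 0.25]

# After revealing an informative region, the belief sharpens toward one
# theme, so the remaining uncertainty drops.
posterior = [0.85, 0.05, 0.05, 0.05]

# Information gain = reduction in uncertainty, in bits.
information_gain = shannon_entropy(prior) - shannon_entropy(posterior)
print(round(information_gain, 3))  # → 1.152
```

A click that leaves the belief unchanged yields zero gain, which is the information-theoretic counterpart of an uninformative image region.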
|
114 |
Distraction by stimuli associated with reward and attentional control in visual search tasks
Matias, Jérémy, 16 July 2019
In our daily activities, selective attention allows us to select task-relevant information and ignore irrelevant information, in order to maintain behavior consistent with our goals. However, sometimes a completely irrelevant stimulus can capture our attention against our will and thereby produce distraction. Distraction was initially considered to depend essentially on the perceptual salience of distractors. Nevertheless, recent studies have shown that stimuli associated with a reward outcome (i.e., with a reward history) are also likely to produce particularly robust and persistent distraction effects, regardless of their relevance to the task at hand and of their perceptual salience. In parallel, a large body of work has been devoted to the study of attentional control, which can prevent distraction by perceptually salient distractors. To date, however, very little work has attempted to manipulate the quality of the attentional control that could be implemented to avoid distraction by stimuli with a reward history. The objective of our work was therefore to determine whether, and if so under what conditions, reward-associated distractors can be ignored effectively or, on the contrary, resist attentional control. Seven studies were conducted in which initially neutral visual stimuli were associated with a (monetary or social) reward, in order to investigate how they affect performance when appearing as distractors in visual search tasks.
Attentional control was manipulated by varying the perceptual (i.e., perceptual load: Studies 1 and 2), cognitive (i.e., cognitive load: Study 3) or sensory (i.e., sensory degradation: Studies 4-7) demands imposed by the task. We have shown that interference from a high-reward distractor resists an increase in perceptual load, unlike the interference caused by merely salient distractors (Study 1). Our event-related potentials study (Study 2) suggests that this effect may be due to enhanced attentional capture (N2pc) under low perceptual load and to less effective attentional suppression (Pd) under high perceptual load for high-reward distractors. Contrary to our expectations, no effect of reward history was observed when manipulating cognitive load (Study 3), leading us to propose that our manipulation may have drained the cognitive resources necessary to learn the distractor-reward association. We then showed that an increase in time pressure (Studies 4-5), known to promote the early selection of relevant targets, can under some circumstances instead increase the difficulty of ignoring distractors. Nevertheless, in these conditions, the mere fact that rewarded distractors may appear seems to impair target selection even more than the time pressure itself. Finally, our last two studies (Studies 6-7) used a more ecological visual search task, involving pictures of driving situations taken from a driver's point of view, in which reward distractors were displayed on the screen of a smartphone in the vehicle cabin. Sensory degradation of the target (achieved by increasing the fog density outside the car) led to greater distraction by distractors paired with a social reward, especially for people with a high level of FoMO (Fear of Missing Out; that is, the pervasive apprehension that others might be having rewarding social experiences from which one is absent).
These results are discussed in light of the literature on distraction by reward history and on attentional control, with the aim of integrating reward history into these models. Moreover, our observations are discussed in the context of applied research on driver distraction, for which our work has particular resonance.
|
115 |
Organisation of audio-visual three-dimensional space
Zannoli, Marina, 28 September 2012
Stereopsis refers to the perception of depth that arises when a scene is viewed binocularly.
The visual system relies on the horizontal disparities between the images from the left and right eyes to compute a map of the different depth values present in the scene. It is usually thought that the stereoscopic system is encapsulated and highly constrained by the wiring of neurons from the primary visual areas (V1/V2) to higher integrative areas in the ventral and dorsal streams (V3, inferior temporal cortex, MT). Throughout four distinct experimental projects, we investigated how the visual system makes use of binocular disparity to compute the depth of objects. In summary, we show that the processing of binocular disparity can be substantially influenced by other types of information, such as binocular occlusion or sound. In more detail, our experimental results suggest that: (1) da Vinci stereopsis is solved by a mechanism that integrates classic stereoscopic processes (double fusion), geometrical constraints (monocular objects are necessarily hidden to one eye, therefore they are located behind the plane of the occluder) and prior information (a preference for small disparities). (2) The processing of motion-in-depth can be influenced by auditory information: a sound that is temporally correlated with a stereomotion-defined target can substantially improve visual search. Stereomotion detectors are optimally suited to track 3D motion but poorly suited to process 2D motion. (3) Grouping binocular disparity with an orthogonal auditory signal (pitch) can increase stereoacuity by approximately 30%.
|
116 |
Learning to Search for Targets: A Deep Reinforcement Learning Approach to Visual Search in Unseen Environments
Lundin, Oskar, January 2022
Visual search is the perceptual task of locating a target in a visual environment. Due to applications in areas like search and rescue, surveillance, and home assistance, it is of great interest to automate visual search. An autonomous system can potentially search more efficiently than a manually controlled one and has the advantages of reduced risk and cost of labor. In many environments, there is structure that can be utilized to find targets more quickly. However, manually designing search algorithms that properly utilize structure to search efficiently is not trivial. Different environments may exhibit vastly different characteristics, and visual cues may be difficult to pick up. A learning system has the advantage of being applicable to any environment where there is a sufficient number of samples to learn from. In this thesis, we investigate how an agent that learns to search can be implemented with deep reinforcement learning. Our approach jointly learns control of visual attention, recognition, and localization from a set of sample search scenarios. A recurrent convolutional neural network takes an image of the visible region and the agent's position as input. Its outputs indicate whether a target is visible and control where the agent looks next. The recurrent step serves as a memory that lets the agent utilize features of the explored environment when searching. We compare two memory architectures: an LSTM, and a spatial memory that remembers structured visual information. Through experimentation in three simulated environments, we find that the spatial memory architecture achieves superior search performance. It also searches more efficiently than a set of baselines that do not utilize the appearance of the environment and achieves similar performance to that of a human searcher. Finally, the spatial memory scales to larger search spaces and is better at generalizing from a limited number of training samples.
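The role of a spatial memory in search can be sketched, in heavily simplified form, as a visitation grid with a greedy exploration rule; the class name, grid representation, and nearest-unvisited-neighbour policy below are illustrative assumptions, not the thesis's learned recurrent network.

```python
import numpy as np

class SpatialMemorySearcher:
    """Toy searcher with a spatial memory over a grid of view positions.

    The agent marks visited cells and greedily moves to an unvisited
    neighbour when one exists, a crude stand-in for the learned policy:
    remembering which parts of the environment were already inspected
    is what lets the search avoid revisits.
    """
    def __init__(self, height, width):
        self.visited = np.zeros((height, width), dtype=bool)

    def step(self, pos):
        r, c = pos
        self.visited[r, c] = True
        h, w = self.visited.shape
        neighbours = [(r + dr, c + dc)
                      for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                      if 0 <= r + dr < h and 0 <= c + dc < w]
        unvisited = [p for p in neighbours if not self.visited[p]]
        # Prefer unexplored cells; fall back to any neighbour.
        return unvisited[0] if unvisited else neighbours[0]

# On a 3x3 grid the greedy memory-driven agent covers every cell
# without revisiting any of them.
agent = SpatialMemorySearcher(3, 3)
pos = (0, 0)
trace = [pos]
for _ in range(8):
    pos = agent.step(pos)
    trace.append(pos)
```

A memoryless agent on the same grid would revisit cells; the visitation grid is the minimal ingredient that makes systematic coverage possible.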
|
117 |
Alexithymia Is Associated With Deficits in Visual Search for Emotional Faces in Clinical Depression
Suslow, Thomas; Günther, Vivien; Hensch, Tilman; Kersting, Anette; Bodenschatz, Charlott Maria, 31 March 2023
Background: The concept of alexithymia is characterized by difficulties identifying and describing one’s emotions. Alexithymic individuals are impaired in the recognition of others’ emotional facial expressions. Alexithymia is quite common in patients suffering from major depressive disorder. The face-in-the-crowd task is a visual search paradigm that assesses the processing of multiple facial emotions. In the present eye-tracking study, the relationship between alexithymia and the visual processing of facial emotions was examined in clinical depression.

Materials and Methods: Gaze behavior and manual response times of 20 alexithymic and 19 non-alexithymic depressed patients were compared in a face-in-the-crowd task. Alexithymia was measured with the 20-item Toronto Alexithymia Scale. Angry, happy, and neutral facial expressions of different individuals were shown as target and distractor stimuli. Our analyses of gaze behavior focused on latency to the target face, number of distractor faces fixated before fixating the target, number of target fixations, and number of distractor faces fixated after fixating the target.

Results: Alexithymic patients exhibited generally slower decision latencies than non-alexithymic patients in the face-in-the-crowd task. The patient groups did not differ in latency to target, number of target fixations, or number of distractors fixated prior to target fixation. However, after having looked at the target, alexithymic patients fixated more distractors than non-alexithymic patients, regardless of expression condition.

Discussion: According to our results, alexithymia goes along with impairments in the visual processing of multiple facial emotions in clinical depression. Alexithymia appears to be associated with delayed manual reaction times and prolonged scanning after the first target fixation in depression, but it might have no impact on the early search phase. The observed deficits could indicate difficulties in target identification and/or decision-making when processing multiple emotional facial expressions. The impairments of alexithymic depressed patients in processing emotions in crowds of faces seem not to be limited to a specific affective valence. In group situations, alexithymic depressed patients might be slowed in processing interindividual differences in emotional expressions compared with non-alexithymic depressed patients. This could represent a disadvantage in understanding non-verbal communication in groups.
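The four gaze measures analysed above can be computed directly from an ordered fixation sequence; the sketch below assumes a simplified event format (face identifier plus fixation onset in milliseconds) and is not the study's actual analysis code.

```python
def crowd_search_metrics(fixations, target_id):
    """Compute face-in-the-crowd gaze metrics from an ordered fixation list.

    fixations: list of (face_id, onset_ms) in temporal order.
    Returns (latency to the first target fixation, distinct distractor
    faces fixated before it, distinct distractor faces fixated after it,
    and the total number of target fixations).
    """
    first = next(i for i, (face, _) in enumerate(fixations) if face == target_id)
    latency = fixations[first][1] - fixations[0][1]
    before = len({face for face, _ in fixations[:first] if face != target_id})
    after = len({face for face, _ in fixations[first + 1:] if face != target_id})
    n_target = sum(1 for face, _ in fixations if face == target_id)
    return latency, before, after, n_target

# Hypothetical trial: target "T" is first fixated 450 ms after trial
# onset, after two distractors; one further distractor follows.
metrics = crowd_search_metrics(
    [("A", 0), ("B", 200), ("T", 450), ("C", 700), ("T", 900)], "T")
print(metrics)  # → (450, 2, 1, 2)
```

The group difference reported above corresponds to the third value, distractors fixated after the first target fixation, being larger for alexithymic patients.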
|
118 |
Applied Error Related Negativity: Single Electrode Electroencephalography in Complex Visual Stimuli
Sawyer, Benjamin, 01 January 2015
Error related negativity (ERN) is a pronounced negative event-related potential (ERP) that follows a known error. This neural pattern has the potential to communicate user awareness of incorrect actions within milliseconds. While the implications for human-machine interfaces and augmented cognition are exciting, the ERN has historically been evoked only in the laboratory, using complex equipment and simple visual stimuli such as letters and symbols. To effectively harness the applied potential of the ERN, detection must be accomplished in complex environments using simple, preferably single-electrode, EEG systems feasible for integration into field- and workplace-ready equipment. The present project attempted to use static photographs to evoke and successfully detect the ERN in a complex visual search task: motorcycle conspicuity. Drivers regularly fail to see motorcycles, with tragic results. To reproduce the issue in the lab, static pictures of traffic were presented, either including or not including motorcycles. A standard flanker letter task replicated from a classic ERN study (Gehring et al., 1993) was run alongside, with both tasks requiring a binary response. Results showed that the ERN could be clearly detected in both tasks, even when limiting data to a single electrode in the absence of artifact correction. These results support the feasibility of applied ERN detection in complex visual search in static images. Implications and opportunities will be discussed, limitations of the study explained, and future directions explored.
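The core of ERN detection by condition averaging can be sketched on synthetic single-electrode data; the sampling layout, noise level, and injected negative deflection below are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated single-electrode EEG: 100 trials x 300 samples, epoched
# around the manual response (values in microvolts, purely synthetic).
n_trials, n_samples = 100, 300
eeg = rng.normal(0.0, 5.0, size=(n_trials, n_samples))  # background noise

# Inject a negative deflection shortly after the response on "error"
# trials only, mimicking an ERN.
errors = np.zeros(n_trials, dtype=bool)
errors[:40] = True
ern_window = slice(150, 180)
eeg[errors, ern_window] -= 8.0

# Classic approach: average epochs per condition, then compare mean
# amplitude in the post-response window between error and correct trials.
error_erp = eeg[errors].mean(axis=0)
correct_erp = eeg[~errors].mean(axis=0)
difference = error_erp[ern_window].mean() - correct_erp[ern_window].mean()
print(difference < -5.0)  # → True: clearly more negative on error trials
```

Averaging across trials is what makes a single noisy electrode sufficient: the background noise shrinks with the number of trials while the error-locked deflection does not.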
|
119 |
A Computational Model of the Temporal Processing Characteristics of Visual Priming in Search. Haggit, Jordan M., January 2016 (has links)
No description available.
|
120 |
Saliency processing in the human brain. Bogler, Carsten, 01 September 2014 (has links)
Attention to visual stimuli can be guided by top-down search strategies or by bottom-up information. The property of a specific position to stand out in a visual scene is referred to as saliency. On the neural level, a representation of a saliency map is assumed to exist. However, to date it is still unclear where such a representation is located in the brain. This dissertation describes three experiments that investigated different aspects of bottom-up saliency processing in the human brain using functional magnetic resonance imaging (fMRI). Neural responses to different salient stimuli presented in the periphery were investigated while top-down attention was directed to the central fixation point. The first two experiments investigated the neural responses to orientation contrast and to luminance contrast. The results indicate that saliency is potentially encoded in a distributed fashion in the visual system and that a feature-independent saliency map is calculated late in the processing hierarchy. The third experiment used natural scenes as stimuli. Consistent with the results of the other two experiments, graded saliency was identified in striate and extrastriate visual cortex, in particular in posterior intraparietal sulcus (pIPS), potentially reflecting a representation of feature-independent saliency. Additionally, information about the most salient positions could be decoded in more anterior brain regions, namely in anterior intraparietal sulcus (aIPS) and frontal eye fields (FEF). Taken together, the results suggest distributed saliency processing of different low-level features in striate and extrastriate cortex that is potentially integrated into a feature-independent saliency representation in pIPS. Shifts of attention to the most salient positions are then prepared in aIPS and FEF. As participants were engaged in a fixation task, saliency is presumably processed in an automatic manner.
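The bottom-up saliency maps assumed throughout these experiments can be sketched, for the luminance-contrast case, as a simple center-surround computation: a location is salient to the extent that it differs from its local surround. The sketch below is a generic illustration under that assumption (window size, test image, and function names are all hypothetical), not the stimulus computation used in the dissertation.

```python
import numpy as np

def box_mean(img, k):
    """Local k x k mean via edge padding and a summed-area table."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Summed-area table with a leading row/column of zeros.
    sat = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    sat[1:, 1:] = padded.cumsum(0).cumsum(1)
    h, w = img.shape
    return (sat[k:k + h, k:k + w] - sat[:h, k:k + w]
            - sat[k:k + h, :w] + sat[:h, :w]) / (k * k)

def saliency(img, k=7):
    """Center-surround saliency: |pixel luminance - local mean luminance|."""
    return np.abs(img - box_mean(img, k))

# A dim field with one bright pop-out patch.
img = np.full((32, 32), 0.2)
img[14:18, 14:18] = 1.0

sal = saliency(img)
i, j = np.unravel_index(sal.argmax(), sal.shape)  # peak lies on the patch
```

A winner-take-all readout over such a map (here, the `argmax`) is the standard way a saliency model predicts the next attention shift, which is the quantity the decoding analyses in aIPS and FEF relate to.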
|