91. Unconscious processing of emotional faces
Gray, Katie L. H. (January 2011)
Due to capacity limits, the brain must select important information for further processing. Evolutionary-based theories suggest that emotional (and specifically threat-relevant) information is prioritised in the competition for attention and awareness (e.g. Ohman & Mineka, 2001). A range of experimental paradigms have been used to investigate whether emotional visual stimuli (relative to neutral stimuli) are selectively processed without awareness and attract visual attention (e.g. Yang et al., 2007). However, very few studies have used appropriate control conditions that help clarify the extent to which observed effects are driven by the extraction of emotional meaning from these stimuli or by their low-level visual characteristics (such as contrast or luminance). The experiments in this thesis investigated whether emotional faces are granted preferential access to awareness and which properties of face stimuli drive these effects. A control stimulus was developed to help dissociate between the extraction of emotional information and low-level accounts of the data. It was shown that preferential processing of emotional information is better accounted for by low-level characteristics of the stimuli than by the extraction of emotional meaning per se. Additionally, a robust ‘face’ effect was found across several experiments. Investigation of this effect suggested that it may not be driven by the meaningfulness of the stimuli, as it was also apparent in an individual who finds it difficult to extract information from faces. Together these findings suggest that high-level information can be extracted from visual stimuli outside of awareness, but that the prioritisation afforded to emotional faces is driven by low-level characteristics. These results are particularly timely given continued high-profile debate surrounding the origins of emotion prioritisation (e.g. Tamietto & de Gelder, 2010; Pessoa & Adolphs, 2010).

92. Adaptation to multiple radial optic flows
Dassy, Brice (January 2015)
There is long-standing evidence suggesting that our visual system can adapt to new visual environments, like the single radial optic flow generated when driving (Brown, 1931; Denton, 1966). In fact, as we move through the environment multiple optic flows can be generated: when driving, for example, we are often exposed to more than one radial optic flow at the same time. In this thesis I investigate whether the visual system can simultaneously adapt to two radial optic flows. More specifically, I explored this issue in three ways. First, I investigated whether the visual system could – through a fast low-level process – adapt to two optic flows present at two specific locations in space. Second, I probed whether the visual system could – through a perceptual learning process – learn to associate two radial optic flows with their locations in space. Third, I examined whether the visual system could – through a perceptual learning process – learn to associate each of two radial optic flows with preceding eye-movements. With regard to the first issue, the results from Experiments 1 – 6 suggested that, following exposure to two radial motion stimuli, a fast low-level process in the visual system could adapt to a radial flow pattern at one location in space: the radial flow pattern generated by the most recently presented radial motion stimulus. With respect to the second issue, the results from Experiments 7 – 10 indicated that the visual system could not learn to associate specific locations with two different radial motion stimuli. Finally, regarding the third issue, the results from Experiment 11 suggest that the visual system can associate specific eye-movements with two different radial motion stimuli. Taken together, these results suggest constraints on the way in which the visual system can adapt to radial motion, and emphasize the importance of self-movement in generating adaptation to new visual environments.

93. Cultural differences in scene perception
Alotaibi, Albandari (January 2016)
Do individuals from different cultures perceive scenes differently? Does culture influence visual attention processes? This thesis investigates not only what these influences are and how they affect eye movements, but also examines some of the proposed mechanisms that underlie the cultural influence on scene perception. Experiments 1 & 2 showed that Saudi participants directed a higher number of fixations to the background of images than British participants did. British participants were also more affected by background changes, an indication of their tendency to bind the focal objects to their contexts. Experiments 3 & 4 revealed a higher overall number of fixations for Saudi participants, along with longer search times. Intra-group comparisons of scanpaths revealed less similarity within the Saudi group than within the British group, demonstrating a greater heterogeneity of search behaviour among Saudi participants. These findings could indicate that the British participants have the advantage of being more able to direct attention towards the goals of the task. The mechanisms proposed for cultural differences in visual attention appeal to particular thinking styles that emerge from the prevailing culture: analytic thinking (common in individualistic cultures) promotes attention to detail and a focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationships between its parts. Priming methodology was used in Experiments 5, 6 & 7 to cue these factors, although it did not reveal any significant effects on eye movement behaviour or on the accuracy of object recognition. Testing these explanations directly (Experiment 8) suggested that the holistic-analytic dimension is one of the main mechanisms underlying cultural diversity in scene perception. Taken together, these experiments show that the allocation of visual attention is also influenced by an individual’s culture.

94. Some aspects of visual discomfort
O'Hare, Louise (January 2013)
Visual discomfort refers to the adverse sensations, such as headaches and eyestrain, encountered on viewing certain stimuli. These sensations can arise under certain viewing conditions, such as stereoscopic viewing and prolonged reading of text patterns. Discomfort can also occur as a result of viewing stimuli with certain spatial properties, including stripes and filtered noise patterns of particular spatial frequencies. This thesis is an exploration of the stimulus properties causing discomfort, within the framework of two theoretical explanations. Both explanations relate to the stimuli being difficult for the visual system to process. The first holds that discomfort is the result of inefficient neural processing: neural activity requires energy to process information, and stimuli that demand a lot of energy to process might be uncomfortable. The second holds that uncomfortable stimuli are ineffective in driving the accommodative (focussing) response: accommodation relies on the stimulus as a cue to drive the response effectively, so an uninformative cue might result in discomfort from an uncertain accommodative response. The following research investigates both these possibilities using a combination of psychophysical experimentation, questionnaire-based surveys of non-clinical populations, and computational modelling. The implications of the work for clinical populations are also discussed.

95. Seeing and visual experience: a consideration of arguments and examples used to support the view that seeing consists in, or involves, the having of visual experiences
Grant, L. B. (January 1967)
No description available.

96. Predictive feedback to the primary visual cortex during saccades
Edwards, Grace (January 2014)
Perception of our sensory environment is actively constructed from sensory input and prior expectations. These expectations are created from knowledge of the world through semantic memories, spatial and temporal contexts, and learning. Multiple frameworks have been created to conceptualise this active perception; these frameworks will be referred to here as inference models. Three elements of inference models have prevailed across these frameworks: first, internal generative models of the visual environment; second, feedback connections which project prediction signals from the model to lower cortical processing areas, where they interact with sensory input; and third, prediction errors, which are produced when the sensory input is not predicted by the feedback signals. The prediction errors are thought to be fed forward to update the generative models. These elements enable hypothesis-driven testing of active perception. In vision, error signals have been found in the primary visual cortex (V1). V1 is organised retinotopically: the structure of the sensory stimulus that enters through the retina is retained within V1. A semblance of that structure exists in feedback predictive signals and error signal production; the feedback predictions interact with retinotopically specific sensory input, which can result in error signal production within that region. Due to the nature of vision, we rapidly sample our visual environment using ballistic eye-movements called saccades, so input to V1 is updated about three times per second. One assumption of active perception frameworks is that predictive signals can update to new retinotopic locations of V1 along with the sensory input. This thesis investigates the ability of active perception to redirect predictive signals to new retinotopic locations with saccades. The aim of the thesis is to provide evidence of the relevance of generative models in a more naturalistic viewing paradigm (i.e. across saccades).
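The generative-model, feedback-prediction, and error-feed-forward loop described above can be sketched generically. This is a toy illustration in the spirit of classic predictive-coding models (e.g. Rao & Ballard, 1999), not the model used in this thesis; the weight matrix, learning rate, and input vector are all arbitrary assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 2)) * 0.1    # generative weights: hidden causes -> sensory input
r = np.zeros(2)                      # internal estimate of the hidden causes
x = np.array([1.0, 0.5, -0.3, 0.2])  # sensory input (e.g. drive arriving at V1)

lr = 0.1
for _ in range(200):
    prediction = W @ r                # feedback: the model's prediction of the input
    error = x - prediction            # prediction error at the lower level
    r += lr * (W.T @ error)           # error fed forward to update the estimate
    W += lr * np.outer(error, r)      # slow adaptation of the generative model

residual = np.linalg.norm(x - W @ r)  # shrinks as the model explains away its input
```

The key structural point matches the abstract: feedback carries predictions downward, only the mismatch (error) is propagated upward, and that mismatch drives learning in the generative model.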
An introduction to active visual perception is provided in Chapter 1. Structural connections and functional feedback to V1 are described at a global level and at the level of cortical layers. The role of feedback connections to V1 is then discussed in the light of current models, homing in on inference models of perception. The elements of inference models are introduced, including internal generative models, predictive feedback, and error signal production. The assumption that predictive feedback relocates in V1 with saccades is highlighted alongside the effects of saccades within the early visual system, which leads to the motivation and introduction of the research chapters. A psychophysical study is presented in Chapter 2 which provides evidence for the transfer of predictive signals across saccades. An internal model of spatiotemporal motion was created using an illusion of motion; the perception of illusory motion signifies the engagement of an internal model, as a moving token is internally constructed from the sensory input. The model was tested by presenting in-time (predictable) and out-of-time (unpredictable) targets on the trace of perceived motion. Saccades were initiated across the illusion every three seconds to cause a relocation of predictive feedback. Predictable in-time targets were better detected than unpredictable out-of-time targets. Importantly, the detection advantage for in-time targets was found 50 – 100 ms after the saccade, indicating transfer of predictive signals across saccades. Evidence for the transfer of spatiotemporally predictive feedback across saccades was supported by the fMRI study presented in Chapter 3. Previous studies have demonstrated increased activity in V1 when processing unpredicted visual stimulation; this activity increase has been related to error signal production, as the input was not predicted by feedback signals.
In Chapter 3, the motion illusion paradigm used in Chapter 2 was redesigned to be compatible with brain activation analysis. The internal model of motion was created prior to the saccade and tested at a post-saccadic retinotopic region of V1. Increased activation was found for spatiotemporally unpredictable stimuli directly after the eye-movement, indicating that the predictive feedback was projected to the new retinotopic region with the saccade. An fMRI experiment was conducted in Chapter 4 to demonstrate that predictive feedback relocation is not limited to motion processing in the dorsal stream. This was achieved by using natural scene images, which are known to engage ventral stream processing. Multivariate analysis was performed to determine whether feedback signals pertaining to natural scenes could relocate to new retinotopic locations with saccades. The predictive character of the feedback was also tested by changing the image content across eye-movements, to determine whether an error signal was produced by the unexpected post-saccadic sensory input. Predictive feedback was found to interact with the images presented post-saccade, indicating that feedback relocated with the saccade. This predictive feedback is thought to contain contextual information related to the image processed prior to the saccade. These three chapters provide evidence for inference models contributing to visual perception during more naturalistic viewing conditions (i.e. across saccades). The findings are summarised in Chapter 5 in relation to inference model frameworks, trans-saccadic perception, and attention. The discussion focuses on the interaction of internal generative models and trans-saccadic perception, with the aim of highlighting several consistencies between the two cognitive processes.

97. Psychophysical studies of interactions between luminance and chromatic information in human vision
Clery, Stéphane (January 2014)
In this thesis, I investigated how human vision processes colour and luminance information to enable perception of our environment. I first tested how colour can alter the perception of depth from shading. A luminance variation can be interpreted as either a variation of reflectance (patterning) or a variation of shape. The process of shape-from-shading interprets luminance variation as changes in the shape of the object (e.g. the shading on an object might elicit the perception of curvature). The addition of colour variation is known to modify this shape-from-shading processing. In the experiments presented here I tested how luminance-driven percepts can be modified by colour. My first series of experiments confirmed that depth is modulated by colour. I tested a larger number of participants than previous studies and, contrary to those studies, found a wide repertoire of behaviour: participants variously experienced more depth, less depth, or no difference. I hypothesised that the colour modulation effect might be due to a low-level contrast modulation of luminance by colour, rather than a higher-level depth effect. In a second series of experiments, I therefore tested how the perceived contrast of a luminance target can be affected by the presence of an orthogonal mask. I found that colour had a range of effects on the perception of luminance, again dependent on the participant. Luminance also had a similarly wide range of effects on the perceived contrast of luminance targets. This showed that, at supra-threshold levels, a luminance target's contrast can be modulated by a component of another orientation (whether colour- or luminance-defined). The effects of luminance and colour did not follow any simple rule. In a third series of experiments, I explored this interaction at detection levels of contrast. I found a cross-interaction between luminance target and mask, but no effect of a colour mask.

98. Action and rehabilitation in hemispatial neglect
Rossit, Stephanie (January 2009)
Milner and Goodale (1995, 2006) propose a model of vision that makes a distinction between ‘vision for perception’ and ‘vision for action’. Regarding hemispatial neglect, they, somewhat contentiously, hypothesize that this disorder is better explained by damage to a high-level representational structure that receives input from the ventral visual stream, but not from the dorsal stream. Consequently, they postulate that neglect patients should code spatial parameters for action veridically. Another strong claim of the model is that the dorsal stream’s control of action is designed for dealing with target stimuli in the ‘here and now’; when time is allowed to pass and a reaction has to be made on the basis of a visual memory, the ventral stream is required for successful performance. One prediction from this is that neglect patients should be able to perform immediate actions, but should present specific impairments when the action is delayed. In Part I of this thesis the pattern of spared and impaired visuomotor abilities in patients with neglect, as specifically predicted by the perception and action model (Milner & Goodale, 1995, 2006), was investigated. In Chapter 1, the performance of patients with and without neglect after right-hemisphere stroke was compared with that of age-matched controls. Participants were asked to point either directly towards targets or halfway between two stimuli (gap bisection), both with and without visual feedback during movement. No neglect-specific impairment was found in timing, accuracy or reach trajectory measures in either pointing or gap bisection. In Chapter 2, I tested whether neglect patients would be unimpaired in immediate pointing, yet show inaccurate pointing when a delay was interposed between the presentation of the stimulus and the response signal. As in Chapter 1, neglect patients showed no accuracy impairments when asked to perform an immediate action.
Conversely, when pointing towards remembered leftward locations they presented specific accuracy deficits that correlated with neglect severity. Moreover, an initial voxel-based lesion-symptom analysis revealed that these deficits were associated with damage to occipito-temporal areas, which were also the areas most damaged in the neglect group. Furthermore, training in grasping the centre of rods (visuomotor feedback training) has been shown to improve neglect (Robertson, Nico & Hood, 1997; Harvey et al., 2003). It is postulated that by using the spared visuomotor abilities of these patients it is possible to ‘bootstrap’ their perceptual deficits through some ‘dorsal-to-ventral recalibration’. Hence, in Part II the immediate and long-term effects of visuomotor feedback training were explored on conventional neglect measures, as well as on daily life tasks. I found that this technique improved neglect symptoms and, crucially, that these improvements were long-lasting, still being present 4 months post-training. Importantly, I also show that the training effects generalized to the patients’ daily lives at follow-up. These findings are very encouraging for the rehabilitation of neglect, as this condition has been shown to be the best single predictor of poor recovery after stroke and is very difficult to treat.

99. Ensemble perception of hue
Maule, John (January 2016)
In order to rapidly get the gist of new scenes or recognise objects, the brain must have mechanisms to process the large amount of visual information which enters the eye. Previous research has shown that observers tend to extract the average feature from briefly seen sets of multiple stimuli that vary along a dimension (e.g. size), a phenomenon called ensemble perception. This thesis investigates ensemble perception of hue. Paper 1 (Maule, Witzel & Franklin, 2014) demonstrates that human observers have memories biased towards the mean hue of a rapidly-presented ensemble of colours. Paper 2 (Maule & Franklin, 2015) further shows that observers are able to identify the mean hue from a distractor fairly reliably, provided the range of hues is manageable. Paper 3 provides evidence that, while observers' settings of the mean hue converge quite closely on the true mean across many trials, the precision of those settings is low and does not support claims that ensemble perception can surpass the limits of visual working memory. Paper 4 found that adults with autism have an enhanced ability to discriminate members from non-members of multi-hue ensembles and a similar ability to extract the mean hue compared to typical adults, but are worse at averaging small sets. Finally, Paper 5 investigated colour afterimages in adults with autism and whether they are affected by the top-down gist of a scene; afterimages were found to be no different in autism compared to a typical group. Overall these studies provide the first comprehensive exploration of ensemble perception of hue, showing that observers can extract and estimate the mean hue of a rapidly-presented multi-colour ensemble with a small hue variance. The ability to average hue may be driven by a sub-sampling mechanism, but the results from autistic adults suggest that it can be modulated by processing style.
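Averaging hue differs from averaging size because hue is a circular dimension: hue angles wrap around the colour circle, so an arithmetic mean can land on the opposite hue. A minimal sketch of the circular (vector) mean that such averaging implies; the example hue values are arbitrary and not taken from these papers:

```python
import math

def mean_hue(hues_deg):
    """Circular mean of hue angles in degrees.

    Each hue is treated as a unit vector on the colour circle; the
    vectors are summed and the angle of the resultant is returned.
    """
    sin_sum = sum(math.sin(math.radians(h)) for h in hues_deg)
    cos_sum = sum(math.cos(math.radians(h)) for h in hues_deg)
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360

# Hues clustered around red (0 deg): the circular mean is near 0,
# whereas the naive arithmetic mean (180) would be the opposite hue.
reds = [350.0, 355.0, 5.0, 10.0]
m = mean_hue(reds)
```

The vector-sum formulation also yields a natural precision measure: the length of the resultant vector shrinks as the hue variance of the ensemble grows.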

100. Towards a better understanding of sensory substitution: the theory and practice of developing visual-to-auditory sensory substitution devices
Wright, Thomas D. (January 2014)
Visual impairment is a global and potentially devastating affliction. Sensory substitution devices have the potential to lessen the impact of blindness by presenting vision via another modality. The chief motivation behind each of the chapters that follow is the production of more useful sensory substitution devices. The first empirical chapter (Chapter 2) demonstrates the use of interactive genetic algorithms to determine an optimal set of parameters for a sensory substitution device based on an auditory encoding of vision (“the vOICe”). In doing so, it introduces the first version of a novel sensory substitution device which is configurable at run-time. It also presents data from three interactive genetic algorithm based experiments that use this new sensory substitution device. Chapter 3 radically expands on this theme by introducing a general-purpose, modular framework for developing visual-to-auditory sensory substitution devices (“Polyglot”). This framework is the fuller realisation of the Polyglot device introduced in the previous chapter and is based on the principle of End-User Development (EUD). In Chapter 4, a novel method of evaluating sensory substitution devices using eye-tracking is introduced. The data show both that the co-presentation of visual stimuli assists localisation and that gaze predicted an auditory target location more reliably than the behavioural responses did. Chapter 5 explores the relationship between sensory substitution devices and other tools used to acquire real-time sensory information (“sensory tools”); this taxonomy, which unites a range of technologies from telescopes and cochlear implants to attempts to create a magnetic sense, can guide further research. Finally, in Chapter 6, the possibility of representing colour through sound is explored, and a crossmodal correspondence between (equi-luminant) hue and pitch is documented that may reflect a relationship between pitch and the geometry of visible colour space.
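The interactive genetic algorithm approach described for Chapter 2 can be sketched in outline. This is a generic illustration, not the thesis's actual procedure: the parameter names and ranges are invented for the example, and a programmed scoring function stands in for the human listener who, in an interactive run, would rate each candidate auditory encoding:

```python
import random

# Hypothetical encoding parameters for a visual-to-auditory device;
# the real vOICe parameter set is not specified in the abstract.
PARAM_RANGES = {
    "scan_rate_hz": (0.5, 4.0),
    "min_pitch_hz": (100.0, 500.0),
    "max_pitch_hz": (1000.0, 8000.0),
}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(ind, rate=0.2):
    child = dict(ind)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            # Gaussian perturbation, clamped to the legal range
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

def crossover(a, b):
    # Uniform crossover: each parameter inherited from either parent
    return {k: random.choice((a[k], b[k])) for k in a}

def evolve(fitness, pop_size=8, generations=10):
    """Generic interactive-GA loop: `fitness` stands in for the
    listener's rating of each candidate encoding."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection, elitist
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Simulated listener who simply prefers a scan rate near 2 Hz
best = evolve(lambda p: -abs(p["scan_rate_hz"] - 2.0))
```

The interactive variant replaces the lambda with a prompt that plays each candidate's soundscape and collects a preference rating, which is why small populations and few generations are typical: every evaluation costs a human judgement.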