81
Cue combination of colour and luminance in edge detection
Sharman, Rebecca J. January 2014
Much is known about visual processing of chromatic and luminance information; however, less is known about how these two signals are combined. This thesis has three aims in investigating how colour and luminance are combined in edge detection: 1) to determine whether presenting colour and luminance information together improves performance in tasks such as edge localisation and blur detection; 2) to investigate how the visual system resolves conflicts between colour and luminance edge information; and 3) to explore whether colour and luminance edge information is always combined in the same way. It is well known that the perception of chromatic blur can be constrained by sharp luminance information in natural scenes. The first set of experiments (Chapter 3) quantifies this effect and demonstrates that it cannot be explained by poorer acuity in processing chromatic information, by the higher contrast of luminance information, or by differences in the statistical structure of colour and luminance information in natural scenes. It is therefore proposed that there is a neural mechanism that actively promotes luminance information. Chapter 4 and Experiments 5.1 and 5.3 investigated whether the presence of both chromatic and luminance information improves edge localisation performance. Participant performance in a Vernier acuity (alignment) task was compared to predictions from three models: ‘winner takes all’, unweighted averaging, and maximum likelihood estimation (a form of weighted averaging). Despite several attempts to differentiate the models, we failed to increase the differences in model predictions sufficiently, and it was not possible to determine whether edge localisation was enhanced by the presence of both cues. In Experiment 5.4 we investigated how edges are localised when colour and luminance cues conflict, using the method of adjustment. Maximum likelihood estimation was used to make predictions based on measurements of each cue in isolation.
These predictions were then compared to the observed data. It was found that, whilst maximum likelihood estimation captured the pattern of the data, it consistently over-estimated the weight of the luminance component. It is suggested that chromatic information may be weighted more heavily than predicted because it is more useful for detecting object boundaries in natural scenes. In Chapter 6 a novel approach, perturbation discrimination, was used to investigate how the spatial arrangement of chromatic and luminance cues, and the type of chromatic and luminance information, can affect cue combination. Perturbation discrimination requires participants to select the grating stimulus that contains spatial perturbation. If one cue dominated over the other, it was expected that this would be reflected in masking and increased perturbation detection thresholds. We compared perturbation thresholds for chromatic- and luminance-defined line and square-wave gratings in isolation and when presented with a mask of the other channel and other grating type. For example, the perturbation threshold for a luminance line target alone was compared to the threshold for a luminance line target presented with a chromatic square-wave mask. The introduction of line masks caused masking for both combinations. Introduction of an achromatic square-wave mask had no effect on perturbation thresholds for chromatic line targets. However, the introduction of a chromatic square-wave mask to luminance line targets improved perturbation discrimination performance. This suggests that the perceived location of the chromatic edges is determined by the location of the luminance lines. Finally, in Chapter 7, we investigated whether chromatic blur is constrained by luminance information in bipartite edges.
Earlier in the thesis we demonstrated that luminance information constrains chromatic blur in natural scenes, but also that chromatic information has more influence than expected when colour and luminance edges conflict. This difference may be due to differences in the stimuli or in the task. The luminance masking effect found using natural scenes was replicated using bipartite edges; the finding that luminance constrains chromatic blur is therefore not limited to natural scene stimuli. This suggests that colour and luminance are combined differently for blur discrimination tasks and edge localisation tasks. Overall, luminance often dominates in edge perception tasks. For blur discrimination this seems to be because the mechanisms differ. For edge localisation it might simply be that luminance cues are often higher contrast and, when this is equated, chromatic cues are actually a good indicator of edge location.
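The maximum likelihood estimation rule tested in Chapters 4 and 5 combines the two cues by weighting each in inverse proportion to its variance. A minimal sketch of that prediction rule follows; the noise values and edge positions are illustrative assumptions, not measurements from the thesis:

```python
import math

# Hypothetical single-cue localisation noise (standard deviations),
# as would be measured for each cue presented in isolation.
sigma_lum = 1.0   # luminance-only edge localisation noise
sigma_col = 2.0   # chromatic-only edge localisation noise

# MLE weights are inversely proportional to each cue's variance.
w_lum = (1 / sigma_lum**2) / (1 / sigma_lum**2 + 1 / sigma_col**2)
w_col = 1 - w_lum

# Combined estimate: weighted average of conflicting single-cue edge positions.
loc_lum, loc_col = 0.0, 1.0
loc_combined = w_lum * loc_lum + w_col * loc_col

# The combined estimate is predicted to be less noisy than either cue alone.
sigma_combined = math.sqrt(1 / (1 / sigma_lum**2 + 1 / sigma_col**2))
```

With these illustrative values the less noisy luminance cue receives a weight of 0.8, which is the sense in which the thesis found observers giving chromatic information more weight than the model predicts.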
82
Spatial and temporal factors affecting human visual recognition memory
Robertson, Daniel January 2007
The current thesis investigated the effects of a variety of spatial and temporal factors on visual recognition memory in human adults. Continuous recognition experiments investigated the effect of lag (the number of items intervening between study and test) on recognition of a variety of stimulus sets (common objects, face-like stimuli, fractals, trigrams), and determined that recognition of common objects was superior to that of other stimulus types. This advantage was largely eradicated when common objects of only one class (birds) were tested. Continuous recognition confounds the number of intervening items with the time elapsed between study and test presentations of stimuli. These factors were separated in an experiment comparing recognition performance at different rates of presentation. D-prime scores were affected solely by the number of intervening items, suggesting an interference-based explanation for the effect of lag. The role of interference was investigated further in a subsequent experiment examining the effect of interitem similarity on recognition. A higher level of global similarity amongst stimuli was associated with a lower sensitivity of recognition. Spatial separation between study and test was studied using same/different recognition of face-like stimuli, and spatial shifts between study and test locations. An initial study found a recognition advantage for stimuli that were studied and tested in the same peripheral location. However, the introduction of eye-tracking apparatus to verify fixation resulted in the eradication of this effect, suggesting that it was an artefact of uncontrolled fixation. Translation of both face-like and fractal stimuli between areas of different eccentricity, with different spatial acuities, did decrease recognition sensitivity, suggesting a partial positional specificity of visual memory. These phenomena were unaffected by 180 degree rotation. 
When interfering stimuli were introduced between study and test trials, translation invariance at a constant eccentricity broke down.
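The d-prime scores reported above are the standard signal-detection sensitivity index, computed from hit and false-alarm rates. A minimal sketch, using illustrative rates rather than the thesis data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates; extreme rates of 0 or 1 would need a correction
# (e.g. a log-linear adjustment) before applying the inverse CDF.
sensitivity = d_prime(0.85, 0.20)
```

A participant guessing at chance (equal hit and false-alarm rates) yields d' = 0, which is why d' rather than raw accuracy isolates the interference effect of intervening items.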
83
Emotion word processing: evidence from electrophysiology, eye movements and decision making
Scott, Graham G. January 2009
A degree of confusion currently exists regarding how the emotionality of a textual stimulus influences its processing. Despite a wealth of research recently conducted in the area, the heterogeneity of the stimuli and methodologies used has prevented general conclusions from being confidently drawn. This thesis aimed to clarify understanding of the cognitive processes associated with emotional textual stimuli by employing well-controlled stimuli in a range of simple but innovative paradigms. Emotion words used in this thesis were defined by their valence and arousal ratings. The questions asked here concerned the early stages of processing of emotional words, the attention-capturing properties of such words, any spill-over effects that would impact the processing of neutral text presented subsequently to the emotional material, and the effect of emotional words on higher cognitive processes such as attitude formation. The first experiment (Chapter 2) manipulated the emotionality of words (positive, negative, neutral) and their frequency (HF – high frequency, LF – low frequency) while ERPs were recorded. An emotion × frequency interaction was found, with emotional LF words responded to fastest, although this advantage was reliable only for positive LF words. Negative HF words were also associated with a large N1 component. Chapter 3 investigated the attention-capturing properties of positive and negative words presented above and below a central fixation cross. The only significant effects appeared when a positive word was presented in the top condition, and a negative word in the bottom condition. Here, saccade latencies were longer and fewer errors were made. Chapter 4 reports an eye-tracking study which examined the effect of target words’ emotion (positive, negative, neutral) and their frequency (HF, LF).
The pattern of results, produced in a variety of fixation time measurements such as first fixation duration and single fixation duration, was similar to that reported in Chapter 2. The existence of any spill-over effect of emotion onto subsequently presented neutral text was examined in a number of ways. Chapter 5 describes priming with emotional primes and neutral targets, but no effect of emotion was found. Chapter 6 employed the same design as Chapter 4 but presented positive, negative or neutral sentences in the middle of neutral paragraphs. It was found that the positive sentences were read fastest, but the neutral sentences following the negative sentences were read faster than those following neutral sentences. Chapters 7 and 8 employed a version of the Velten mood-induction tool to examine the effect of mood when reading emotional text. Chapter 7 was a replication of Chapter 4 with participant groups in positive, negative and neutral moods. While the neutral group showed similar results to those produced in Chapter 4, the positive group fixated only the positive HF words faster; the negative group showed a frequency effect within each emotional word type, but within HF words positive words were viewed for less time than neutral words. Chapter 8 had participants read four product reviews and afterwards rate each of the products on a set of semantic differentials. This was a 3 (mood: positive, negative, neutral) × 2 (message type: positive, negative) × 2 (word type: positive, negative) design. There was no effect of mood, but positive messages were read more quickly when they contained positive words and negative messages were read more quickly when they contained negative words. Participants were asked to recommend each product to individuals in either a prevention or a promotion focus.
When the focus was prevention there were additive effects of message and word type, but when the focus was promotion there was an interaction, with the positive message conveyed using negative words being rated highest. The same pattern also emerged in the series of semantic differentials. Possible mechanisms to account for these findings are discussed, including many incarnations of McGinnies’s (1949) perceptual defense theory. Future studies should aim to combine the current knowledge with motivational, goal-orientated models such as Higgins’s (1998) theory of regulatory focus.
84
Not a slave to the rhythm: the perceptual consequences of rhythmic visual stimulation
Kerlin, Jess Robert January 2016
We investigated whether rhythmic visual stimulation leads to changes in visual perception attributable to the entrainment of endogenous alpha-band oscillations. First, we report evidence that the attentional blink phenomenon is not selectively modified by alpha-band rhythmic entrainment. Next, we provide evidence that changes in single target identification following rhythmic stimulation are poorly explained by rhythmic entrainment, but well explained by alternative factors. We report failures to replicate the results of two previous visual entrainment studies supporting the hypothesis that alpha-band rhythmic stimulation leads to matching rhythmic fluctuations in target detection. Finally, we examined whether temporal acuity during an RSVP sequence is dependent on rhythmic entrainment by studying the role of object change on temporal acuity, finding novel results inconsistent with the predictions of the rhythmic entrainment model. We conclude that visual perception is robust against entrainment to task-irrelevant rhythmic visual inputs and that endogenous and externally driven oscillations in the visual system may be functionally distinct.
85
Semantic and phonological context effects in visual search
Telling, Anna L. January 2008
Visual search requires participants to search for a pre-specified target amongst a number of distractors. According to theories of visual search, attention is directed towards the target through a combination of stimulus-driven (bottom-up) and goal-driven (top-down) means. For example, when searching for a red car, top-down attention can prepare the visual system to prioritise items with matching visual properties to the target, e.g., red objects. Theories of visual search support guidance according to visual properties, including the Guided Search model (Wolfe, 1994) and Attentional Engagement Theory (AET: Duncan & Humphreys, 1989). However, whether or not attention can be guided according to non-visual properties of the stimulus, such as semantic and name information, remains controversial (Wolfe & Horowitz, 1994). This thesis studied search for a target (e.g., baseball-bat) in the presence of semantically related (e.g., racquet), phonologically identical (homophones, e.g., animal-bat) and phonologically related distractors (e.g., bag). Participants’ reaction times (RTs), error rates, eye movements and event-related potentials (ERPs) were monitored, and performance compared between young, older adult and brain-damaged individuals. Chapters 2 to 4 report semantic interference for all participant groups; Chapter 5 reports homophone interference in young adults and Chapter 6 reports no interference of phonologically related distractors in search for the target by young adults. The results support search being guided according to semantic and whole-name information about the target only. The mechanisms involved in this interference and contributions of these findings to the theories of visual search will be discussed.
86
Investigating the determinants of temporal integration
Paul, Liza January 2003
Physiological, clinical and empirical studies suggest that visual input is functionally segregated (e.g. Livingstone and Hubel, 1988; Hubel and Livingstone, 1987; Zeki, 1973). Moreover, this functional processing results in concurrently presented feature attributes being processed and perceived at different times (Moutoussis and Zeki, 1998). However, findings from the attentional and categorisation literature call into question a fixed account of feature processing (Posner, 1980; Stelmach and Herdman, 1991; Carrasco and McElree, 2001; Oliva and Schyns, 2000; Goldstone, 1995). In particular, previous research has demonstrated a processing advantage for attended information. From this literature it seems likely that the enhanced saliency of an attribute will accelerate the processing time of this dimension and consequently should modulate any perceptual asynchrony between concurrently presented features. Moreover, if attention offers a selective processing advantage, this should induce processing asynchrony between attended and unattended information across the visual field. The present research set out to examine how the visual system constructs a seemingly unified and veridical representation from this asynchronous information. First, the results add weight to the proposal that visual processing is not synchronous. Second, because this asynchrony is revealed in perception, it seems that the visual system fails to account for it. Finally, the asynchrony does not appear to be fixed; instead, the experimental or attentional demands of the task seem to modulate the perceptual processing of attribute or localised information.
87
Clarifying the neurophysiological basis of the other-race effect
Vizioli, Luca January 2012
The other-race effect (ORE) is a well-known phenomenon whereby individuals tend to identify faces from their same race (SR) more accurately than faces from another race (OR). First reported by Feingold (1914) almost a hundred years ago, the ORE has since found consistent support at the behavioural level. In spite of a general consensus regarding the robustness of this effect, theoretical accounts have thus far failed to reach an agreement concerning the causes underlying this phenomenon. Two main strands exist within the academic literature, differing on the alleged roots of the ORE. One regards this phenomenon as stemming from the different levels of expertise individuals hold with SR and OR faces (i.e. the expertise-based accounts); the other advocates the importance of social cognitive factors (i.e. the social cognitive accounts). Neuroimaging data can provide important insights in understanding the basis of the ORE. These studies, though, have thus far failed to reach a degree of consistency. EEG data, for example, are highly contradictory: a number of studies report no race sensitivity on the N170 face-preferential component, while others show that this component is in fact modulated by race. However, discrepancy is found even amongst the studies reporting N170 modulation by race, with some showing a larger N170 to SR faces and others revealing the opposite pattern. Similarly, fMRI data show the same degree of inconsistency, especially with regard to the role played by the fusiform face area (FFA). The aim of this thesis is to clarify the neurophysiological basis of the ORE in order to gain further insights into its origins. To this end, three studies (two employing EEG and one fMRI) were designed to answer three main questions related to the ORE: when, how and where in the brain this phenomenon occurs.
The first study investigates the conjoint effects of race and the face inversion effect (FIE, regarded as a marker of configural face processing) on the N170. Interestingly, no race modulations on this ERP component were observed for upright faces. Race, however, impacted upon the magnitude of the electrophysiological FIE, with inverted SR faces leading to greater recognition impairment and eliciting larger N170 amplitudes compared to inverted OR faces. These results indicate that race impacts upon early perceptual stages of face processing and that SR and OR faces are processed in a qualitatively different manner. The second study exploits the advantages conferred by an adaptation paradigm to test neural coding efficiency for faces of different races. An unbiased spatiotemporal data-driven analysis on the newly developed single-trial repetition suppression (srRS) index, which fully accounts for the paired nature of the design, revealed differential amounts of repetition suppression across races in the N170 time window. These data suggest that SR faces are coded more efficiently than OR faces and, in line with the previous results, that race is processed at early perceptual stages. The final study investigates whether and where in the brain faces are coded according to the laws predicted by Valentine’s norm-based multidimensional face space model. Representational Dissimilarity Matrices (RDMs) showed that faces are coded as a function of experience within the dominant FFA, according to the laws of Valentine’s theoretical framework. Importantly, in all experiments I tested both Western Caucasian (WC) and East Asian (EA) observers viewing WC and EA faces. A crossover interaction between the race of the observers and that of the face stimuli is in fact crucial to genuinely relate any observed effect to race, and to exclude potential low-level confounds that may be intrinsic to the stimulus set.
These data, taken together, indicate that the ORE is an expertise-based phenomenon and that it takes place at an early perceptual level of face processing.
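The representational dissimilarity analysis in the final study is conventionally computed as one minus the pairwise correlation between the multivoxel response patterns evoked by each stimulus. A minimal sketch on toy data (the array sizes and random patterns are assumptions for illustration, not the thesis data):

```python
import numpy as np

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli.
    `patterns` has shape (n_stimuli, n_voxels); rows are stimuli."""
    return 1 - np.corrcoef(patterns)

rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 50))  # 4 face stimuli x 50 voxels (toy)
m = rdm(patterns)
```

The resulting matrix is symmetric with a zero diagonal; comparing such matrices across observer groups is what allows the coding of SR and OR faces to be contrasted without assuming any particular response magnitude.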
88
Conscious perception of illusory colour
Powell, Georgina January 2012
Visual perception can be defined as the ability to interpret the pattern of light entering the eyes to form a reliable, useful representation of the world. A well-accepted perspective suggests that these interpretations are influenced by prior knowledge about the statistics of natural scenes and are generated by combining information from different cues. This thesis investigates how these processes influence our perception of two phenomena: afterimages and colour distortions across the visual field. Both are generated on the retina, do not represent meaningful properties of the physical world, and are rarely perceived during natural viewing. We suggest that afterimage signals are inherently ambiguous and thus are highly influenced by cues that increase or decrease the likelihood that they represent a real object. Consistent with this idea, we found that afterimages are enhanced by contextual edges more so than real stimuli of similar appearance. Moreover, afterimage duration was reduced by saccadic eye movements relative to fixation, pursuit and blinking, perhaps because saccades cause an afterimage to move differently from a real object and thus provide a cue that the afterimage is illusory. Contextual edges and saccades were found to influence afterimage duration additively, although contextual edges influenced the probability of perceiving an afterimage more than saccades did. The final part of the thesis explored the hypothesis that colour distortions across the retina, produced mainly by spectral filtering differences between the periphery and fovea, are compensated for in natural viewing conditions. However, we did not find evidence of compensatory mechanisms in the two natural conditions tested, namely eye movements (as opposed to surface movements) and natural spectra (as opposed to screen-based spectra).
Taken together, the experiments in this thesis demonstrate that these ‘illusory’ phenomena, perceived strongly in laboratory conditions but rarely during natural viewing, are useful tools to probe how perceptual decisions are made under different conditions.
89
The role of verbal processing in face recognition memory
Nakabayashi, Kazuyo January 2005
This dissertation attempts to provide a comprehensive view of the role of verbal processing in face recognition memory by examining some of the neglected issues in two streams of cognitive research: face recognition and verbal overshadowing. Traditionally, research in face recognition has focused on visual and semantic aspects of familiar and unfamiliar face processing, with little acknowledgement of any verbal aspect. By contrast, the verbal overshadowing literature examines the effect of verbal retrieval of unfamiliar face memory on subsequent recognition, with little attention to the actual mechanisms underlying the processing of these faces. Although both are concerned with our ability to recognise faces, the two literatures have proceeded independently because their research focuses differ. It therefore remains uncertain whether or not face encoding entails verbal processing, and whether or not verbal processing is always detrimental to face recognition. To address these issues, some experimental techniques used in face recognition research were combined with methods from verbal overshadowing research. The first strand of experiments examined configural-visual and featural-verbal processing associations in change recognition tasks. The second strand systematically examined the role of verbal processing in recognition memory by manipulating the degree of verbal involvement during and after encoding. The third strand examined the ‘perceptual expertise’ account of verbal overshadowing in picture recognition memory tasks, involving pictures of familiar and unfamiliar people. The fourth strand directly tested a tentative hypothesis, ‘verbal code interference’, to explain verbal overshadowing by manipulating the frequency and timing of face verbalisation in line-up identification tasks. The concluding experiment looked at the relation between intentional learning and verbal overshadowing in a recognition memory task using more naturalistic stimuli.
The main findings indicate, first, that the mechanisms underlying face processing appear to be complex, and that simple processing associations (configural-visual and featural-verbal processing) cannot be made. Second, face encoding seems to involve some sort of verbal processing, which may actually be necessary for successful recognition. Third, post-encoding verbalisation per se does not seem to be the key determinant of recognition impairment; rather, interference between verbal representations formed under different contexts seems to harm recognition. Fourth, verbal overshadowing was found only for unfamiliar face picture recognition, but not for familiar face picture recognition, casting doubt on the ‘perceptual expertise’ account. Finally, although no clear evidence linking intentional learning and verbal overshadowing was found, intentional learning and verbalisation in combination affected response patterns. These results are discussed in relation to the ongoing debate over the causes of the verbal overshadowing effect, which raises an important ecological question as to whether the phenomenon might reflect natural human memory interference.
90
Control and development of time-based visual selection
Zupan, Zorana January 2015
Attention plays an integral role in healthy cognitive functioning, and failures of attention can lead to unfavourable and dangerous consequences. As such, comprehending the nature of attentional mechanisms is of fundamental theoretical and practical importance. One way in which humans can attentionally prioritise new information is through top-down inhibition of old distractors, known as the preview benefit (Watson & Humphreys, 1997). In the preview benefit, time is used to efficiently guide visual selection in space. Given that this ability is based on limited resources, its deployment in everyday life may be hindered by a multitude of factors. This thesis explores the endogenous and exogenous factors that can facilitate or constrain the preview benefit, and determines its developmental trajectory. Understanding the nature of this mechanism in adults can elucidate the contexts in which visual selection can efficiently filter old distractors. In turn, a developmental perspective can unravel the hidden aspects of this ability and inform when children become able to use temporal information for efficient attentional selection. Chapter 1 introduces the theoretical problems and topics of attentional research in adults and children. Chapter 2 addresses the question of endogenous control of top-down inhibition in time-based visual selection: when can top-down inhibition be controlled by the observer? Chapter 3 examines the exogenous influence of complex stimuli on time-based visual selection. Chapters 4 and 5 focus on the development of time-based visual selection for stationary and moving stimuli, respectively, in children aged 6 to 12 years. These chapters also examine the association between the efficiency of the preview benefit and the development of executive functions across different age groups. Overall, the findings suggest that there are remarkable endogenous and exogenous constraints on how time guides selection.
This may account for why, in certain contexts, attentional selection can fail to be efficient. Moreover, time-based visual selection shows striking quantitative and qualitative changes over developmental time; most importantly, children have a long developmental trajectory in learning to ignore moving items. Unlike in children, adults’ time-based visual selection is coupled with individual differences in executive functions, highlighting an acquired functional connection. The findings are discussed in terms of their theoretical implications for time-based visual selection, the development of children’s attentional control over distractors, and routes to impact for educational and clinical practice and for policy makers.