  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Automatic visuospatial attention shifts : perceptual correlates, interventions and oscillatory signatures

Ahrens, Merle-Marie January 2018 (has links)
Our visual perception is shaped by both external and internal factors, which continuously compete for limited neural resources. Salient external (exogenous) events capture our attention automatically, whereas internal (endogenous) attention can be directed towards sensory events according to our current behavioural goals. Advances in neuroimaging and brain stimulation, including electrophysiological techniques such as electroencephalography and magnetoencephalography (EEG/MEG), have allowed us to begin to map the underlying functional neural architecture mediating both exogenously driven and endogenously controlled visual attention. However, while the neural EEG/MEG correlates of endogenously controlled attention have been investigated in much detail, those of exogenously driven attention are substantially less well understood. One reason for this is that exogenously driven effects are difficult to isolate from the influence of endogenous control processes. In a series of three experiments, I sought to: 1) study how the perceptual outcomes of both endogenously and exogenously driven attention can be effectively dissociated and investigated; and 2) provide a better understanding of the functional architecture of attention control with regard to its underlying neural substrates and oscillatory signatures, particularly when exogenously driven. To this end, I employed a visuospatial attention paradigm which, by design, behaviourally dissociates exogenous from endogenously driven effects (experiment 1). Furthermore, by utilizing the same behavioural paradigm in combination with neuronavigated MRI-based transcranial magnetic stimulation (TMS) over two key attentional network nodes (i.e., the right intraparietal sulcus and right temporo-parietal junction), I probed the extent to which the neural substrates of endogenous vs. exogenous orienting overlap or can be dissociated (experiment 2). 
Lastly, I used electroencephalography (EEG) to investigate the oscillatory signatures underlying attention in a task which is typically employed to study exogenous orienting and which putatively triggers exogenous attention in isolation (experiment 3). The results revealed that while exogenous attentional processes can be behaviourally dissociated from endogenous attention (experiment 1), the neural substrates of exogenous attention appear to cover a wide network of attention areas. This includes nodes in both the right ventral attention network (i.e., the right temporo-parietal junction) and the right dorsal network (i.e., the right intraparietal sulcus), the latter of which has predominantly been associated with endogenous attention control (experiment 2). Interestingly, even in tasks that have been utilized to test exogenous attentional effects in isolation, endogenous control processes, as indexed by increased mid-frontal theta-band activity, can heavily influence the behavioural outcome (experiment 3). Based on these results, I conclude that there appears to be a strong interplay between endogenous control and exogenously driven attention processes. These findings highlight that in order to better understand the functional architecture of (purely) exogenously driven effects, we need to effectively account for the potential influence of endogenous control. One approach to achieve this is to manipulate both types of attention simultaneously rather than separately, as illustrated in the present work.
2

Predictive feedback to the primary visual cortex during saccades

Edwards, Grace January 2014 (has links)
Perception of our sensory environment is actively constructed from sensory input and prior expectations. These expectations are created from knowledge of the world through semantic memories, spatial and temporal contexts, and learning. Multiple frameworks have been created to conceptualise this active perception; they will hereafter be referred to as inference models. Three elements of inference models have prevailed in these frameworks: firstly, internal generative models of the visual environment; secondly, feedback connections which project prediction signals of the model to lower cortical processing areas to interact with sensory input; and thirdly, prediction errors, which are produced when the sensory input is not predicted by feedback signals. The prediction errors are thought to be fed forward to update the generative models. These elements enable hypothesis-driven testing of active perception. In vision, error signals have been found in the primary visual cortex (V1). V1 is organised retinotopically; the structure of the sensory stimulus that enters through the retina is retained within V1. A semblance of that structure exists in feedback predictive signals and error signal production. The feedback predictions interact with the retinotopically specific sensory input, which can result in error signal production within that region. Due to the nature of vision, we rapidly sample our visual environment using ballistic eye movements called saccades. Therefore, input to V1 is updated about three times per second. One assumption of active perception frameworks is that predictive signals can update to new retinotopic locations of V1 with sensory input. This thesis investigates the ability of active perception to redirect predictive signals to new retinotopic locations with saccades. The aim of the thesis is to provide evidence of the relevance of generative models in a more naturalistic viewing paradigm (i.e. across saccades). 
An introduction to active visual perception is provided in Chapter 1. Structural connections and functional feedback to V1 are described at a global level and at the level of cortical layers. The role of feedback connections to V1 is then discussed in the light of current models, which homes in on inference models of perception. The elements of inferential models are introduced, including internal generative models, predictive feedback, and error signal production. The assumption of predictive feedback relocation in V1 with saccades is highlighted alongside the effects of saccades within the early visual system, which leads to the motivation and introduction of the research chapters. A psychophysical study is presented in Chapter 2 which provides evidence for the transfer of predictive signals across saccades. An internal model of spatiotemporal motion was created using an illusion of motion. The perception of illusory motion signifies the engagement of an internal model, as a moving token is internally constructed from the sensory input. The model was tested by presenting in-time (predictable) and out-of-time (unpredictable) targets on the trace of perceived motion. Saccades were initiated across the illusion every three seconds to cause a relocation of predictive feedback. Predictable in-time targets were better detected than the unpredictable out-of-time targets. Importantly, the detection advantage for in-time targets was found 50–100 ms after the saccade, indicating transfer of predictive signals across the saccade. Evidence for the transfer of spatiotemporally predictive feedback across saccades was supported by the fMRI study presented in Chapter 3. Previous studies have demonstrated increased activity in V1 when processing unpredicted visual stimulation. This activity increase has been related to error signal production, as the input was not predicted via feedback signals. 
In Chapter 3, the motion illusion paradigm used in Chapter 2 was redesigned to be compatible with brain activation analysis. The internal model of motion was created prior to saccade and tested at a post-saccadic retinotopic region of V1. Increased activation was found for spatiotemporally unpredictable stimuli directly after the eye movement, indicating that the predictive feedback was projected to the new retinotopic region with the saccade. An fMRI experiment was conducted in Chapter 4 to demonstrate that predictive feedback relocation is not limited to motion processing in the dorsal stream. This was achieved by using natural scene images, which are known to engage ventral stream processing. Multivariate analysis was performed to determine if feedback signals pertaining to natural scenes could relocate to new retinotopic locations with saccades. The predictive characteristic of feedback was also tested by changing the image content across eye movements to determine if an error signal was produced due to the unexpected post-saccadic sensory input. Predictive feedback was found to interact with the images presented post-saccade, indicating that feedback relocated with the saccade. The predictive feedback was thought to contain contextual information related to the image processed prior to saccade. These three chapters provide evidence for inference models contributing to visual perception during more naturalistic viewing conditions (i.e. across saccades). These findings are summarised in Chapter 5 in relation to inference model frameworks, trans-saccadic perception, and attention. The discussion focuses on the interaction of internal generative models and trans-saccadic perception, with the aim of highlighting several consistencies between the two cognitive processes.
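The inference-model loop summarised in this abstract (a generative model issues feedback predictions, the mismatch with sensory input produces a prediction error, and the error is fed forward to update the model) can be sketched in a few lines. This is an illustrative toy only, with invented variable names and dimensions, not the analysis used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16
true_scene = rng.normal(size=n_pixels)   # stand-in for the visual input to V1
model_estimate = np.zeros(n_pixels)      # state of the internal generative model
learning_rate = 0.1

for step in range(200):
    sensory_input = true_scene + 0.05 * rng.normal(size=n_pixels)
    prediction = model_estimate                     # feedback signal to V1
    prediction_error = sensory_input - prediction   # error signal produced in V1
    model_estimate += learning_rate * prediction_error  # feed-forward model update

# After learning, the feedback prediction accounts for most of the input,
# so the residual error is small.
print(float(np.abs(true_scene - model_estimate).mean()))
```

The estimate converges because each feed-forward error nudges the generative model toward the input it failed to predict; on inference-model accounts, whatever error remains after learning is what an unpredicted (e.g. post-saccadic) stimulus would re-evoke.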
3

Developmental trajectories of social signal processing

Ross, Paddy January 2015 (has links)
Most of the social cognitive and affective neuroscience in the past 3 decades has focussed on the face. By contrast, the understanding of processing social cues from the body and voice has been somewhat neglected in the literature. One could argue that, from an evolutionary point of view, body recognition (and particularly emotional body perception) is more important than that of the face. It may be beneficial for survival to be able to predict another’s behaviour or emotional state from a distance, without having to rely on facial expressions. If there are relatively few cognitive and affective neuroscience studies of body and voice perception, there are even fewer on the development of these processes. In this thesis, we set out to explore the behavioural and functional developmental trajectories of body and voice processing in children, adolescents and adults using fMRI, behavioural measures, and a wide range of univariate and multivariate analytical techniques. We found, using simultaneously recorded point-light and full-light displays of affective body movements, an increase in emotion recognition ability until 8.5 years old, followed by a slower rate of accuracy improvement through adolescence into adulthood (Chapter 2). Using fMRI we show, for the first time, that the body-selective areas of the visual cortex are not yet ‘adult-like’ in children (Chapter 3). We go on to show in Chapter 4 that although the body-selective regions are still maturing in the second decade of life, there is no difference between children, adolescents and adults in the amount of emotion modulation that these regions exhibit when presented with happy or angry bodies. We also show a positive correlation between amygdala activation and the amount of emotion modulation of the body-selective areas in all subjects except the adolescents. 
Finally, we turn our attention to the development of the voice-selective areas in the temporal cortex, finding that, contrary to face and body processing, these areas are already ‘adult-like’ in children in terms of strength and extent of activation (Chapter 5). These results are discussed in relation to the current developmental literature, limitations are considered, directions for future research are given and the wider clinical application of this work is explored.
4

An investigation into the perceptual and cognitive factors affecting word recognition during normal reading

Hand, Christopher James January 2010 (has links)
The present thesis examines the effects of a range of factors on the processing of written language, principally using eye movement recording technology while participants read short passages of text. Factors known to influence written language processing range from lower-level perceptual constraints to higher-level discourse contingencies. Examples of lower-level to higher-level variables are, respectively: intraword orthographic constraints, such as word-initial letter constraint (WILC) – how many other words share the same three initial letters of a given word; lexical-level word frequency – how often a word occurs in written language; and extraword contextual predictability – how likely a word is to occur given the discourse up to the position of the word in the passage. The present thesis not only investigates the main effects of these factors, but also studies the simultaneous effects that these factors have on written language processing. Information acquired from the right of the current fixation location – parafoveal preview – is essential for reading to proceed at a normal rate. Preview is typically studied using gaze-contingent display change paradigms – non-foveal text is obscured or manipulated, and effects on eye movement behaviour recorded. The present thesis studies an additional method of measuring the effects of preview, without manipulating the text displayed: launch distance – how far readers’ prior fixation is from a given word, before foveal processing of that word. Visual acuity diminishes as retinal eccentricity increases. The present thesis examines how the effects of the above factors, and any interactions between them, are modulated by launch distance. Standard effects of frequency and predictability were found across all studies. Lower-frequency (LF) words were processed with greater difficulty than higher-frequency (HF) words; low-predictability (LP) words were processed with greater difficulty than high-predictability (HP) words. 
Consistent with prior research (Rayner, Ashby, Pollatsek, & Reichle, 2004), Experiment 1 found additive effects of frequency and predictability on eye movement behaviour. However, further investigation revealed that when preview was highest (i.e., at Near launch distances), frequency and predictability exerted an interactive effect. Experiment 2a further investigated the simultaneous effects of frequency and predictability, addressing a methodological concern about Experiment 1: principally, that the HP contexts in Experiment 1 were in fact medium-predictability (MP), potentially obscuring any interaction, as the acquisition of parafoveal information is influenced by the frequency and predictability of the parafoveal word. Comparing very low-predictability (VLP) items to very high-predictability (VHP) items, the interactive pattern of effects observed in the Near launch distance condition of Experiment 1 was replicated in the global analyses of Experiment 2a. In Experiment 2b, comparisons of HF and LF words in VLP and specifically designed MP items yielded an additive pattern of effects, consistent with Experiment 1. Furthermore, conditionalised analyses of these items by launch distance showed an interactive pattern of effects, but only at Near launch distances. Conditionalised analyses of HF and LF words in VLP and VHP materials from Experiment 2a revealed an interactive pattern of frequency and predictability effects at both Near and Middle launch distances. It is argued that frequency and predictability can interact under two distinct conditions, but both depend on preview. When HF and LF words are presented in MP contexts, a high level of preview must be provided by a Near launch distance for an interaction to be observed; when HF and LF words are presented in VHP contexts, sufficient information can be extracted at further launch distances, generating an interactive pattern of effects in global analyses. 
Experiment 3 examines whether fixation durations are inflated prior to skipping a word in text. An overall non-significant effect of word skipping on prior fixation durations was observed. However, this result was somewhat misleading – inflated fixation durations prior to skipping were observed, but only when to-be-skipped words were either HF or HP; indeed, the largest mean inflation prior to skipping was observed when the to-be-skipped word was both HF and HP. These results suggest that when readers are able to extract the most information about parafoveal words (e.g., when those words are HF or HP), fixation durations prior to skipping these words are inflated. It is tentatively suggested that these effects reflect a longer accumulation of information from the parafoveal to-be-skipped word. These effects are consistent with models of eye movement control permitting parallel processing of written information, as opposed to a strictly serial approach. Experiments 4a and 4b tested the effects of WILC. Experiment 4a employed a lexical decision task, examining the separate and combined effects of WILC and frequency. LF words were responded to less quickly than HF words. Words with low WILC (LC words; e.g., “clown” shares its initial trigram “clo” with many words) were processed more quickly than words with high WILC (HC words; e.g., “dwarf” shares its initial trigram “dwa” with few words). It is suggested that LC words in a lexical decision task are responded to quickly because their initial trigram is shared by a large number of viable words, facilitating a “word” response. The initial trigrams of HC words are not shared by many other words, potentially hindering a “word” response. Experiment 4b re-tests the role of WILC in eye movement behaviour during reading, based on an earlier study by Lima and Inhoff (1985). 
Unlike Lima and Inhoff’s study, the frequency and predictability (known to influence the extraction of parafoveal information) of LC and HC target words were also manipulated. In contrast to the findings of Lima and Inhoff (but consistent with their original prediction), HC words were found to exhibit a processing advantage over LC words in measures of eye movement behaviour reflecting early, lexical processing. Further analyses based on launch distances from, and landing positions within, target words suggested that the pattern of effects observed may be due to the accumulation of WILC information from the parafovea. The present thesis finds that word frequency and contextual predictability can yield interactive effects on processing, but that any possible interaction is dependent on the acquisition of parafoveal information. Evidence of inflated fixation durations prior to word skipping was observed, but these effects are modulated by the characteristics of the parafoveal to-be-skipped word. The initial letters of words have a substantial effect on processing, but this effect is task-dependent. In lexical decision, activation of “wordness” is advantageous, and LC words exhibit an advantage over HC words. In natural reading, information is available from sentential context and the parafovea, and HC words carry an advantage over LC words. The present thesis argues for the use of launch distance as a metric for measuring preview benefit, albeit in a necessarily post-hoc fashion. Reliable effects of launch distance were found across all experiments where it was examined as a factor – eventual fixation time on a word increases as the distance of the prior fixation from the beginning of that word increases. Launch distance was also shown to modulate the effects of a range of factors which influence written language processing, and the interactions between these variables.
5

Learning and reversal in the sub-cortical limbic system : a computational model

Thompson, Adedoyin Maria January 2010 (has links)
The basal ganglia are a group of nuclei that signal to and from the cerebral cortex. They play an important role in cognition and in the initiation and regulation of normal motor activity. A range of characteristic motor diseases such as Parkinson's and Huntington's have been associated with the degeneration and lesioning of the dopaminergic neurons that target these regions. The study of dopaminergic activity has numerous benefits, from understanding what effects neurodegenerative diseases have on behaviour to determining how the brain responds and adapts to rewards. It is also useful for understanding what motivates agents to select actions and do the things that they do. The striatum is a major input structure of the basal ganglia and is a target of dopaminergic neurons which originate in the midbrain. These dopaminergic neurons release dopamine, which is known to exert modulatory influences on the striatal projections. The dorsal regions of the striatum are involved in action selection and control, while the dopaminergic projections to the ventral striatum are involved in reward-based learning and motivation. There are many computational models of the dorsolateral striatum and the basal ganglia nuclei which have been proposed as neural substrates for prediction, control and action selection. However, there are relatively few models which aim to describe the role of the ventral striatal nucleus accumbens and its core and shell subdivisions in motivation and reward-related learning. This thesis presents a systems-level computational model of the sub-cortical nuclei of the limbic system which focusses, in particular, on the nucleus accumbens shell and core circuitry. It is proposed that the nucleus accumbens core plays a role in enabling reward-driven motor behaviour by acquiring stimulus-response associations which are used to invigorate responding. 
The nucleus accumbens shell mediates the facilitation of highly rewarding behaviours as well as behavioural switching. In this model, learning is achieved by implementing isotropic sequence order learning with a third factor (ISO-3) that triggers learning at relevant moments. This third factor is modelled by phasic dopaminergic activity, which enables long-term potentiation to occur during the acquisition of stimulus-reward associations. When a stimulus no longer predicts reward, tonic dopaminergic activity is generated, which enables long-term depression. Weak depression has been simulated in the core so that the stimulus-response associations which enable instrumental responding are not rapidly abolished. However, comparatively strong depression is implemented in the shell so that information about the reward is quickly updated. The shell influences the facilitation of highly rewarding behaviours enabled by the core through a shell-ventral pallido-mediodorsal pathway. This pathway functions as a feed-forward switching mechanism and enables behavioural flexibility. The model presented here is capable of acquiring associations between stimuli and rewards and of simulating reversal learning. In contrast to earlier work, reversal is modelled by the attenuation of the previously learned behaviour. This allows the reinstatement of behaviour to occur quickly, as observed in animals. The model will be tested in both open- and closed-loop experiments and compared against animal experiments.
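The gating idea described above (sequence-order learning whose plasticity is switched on by a third, dopamine-like factor) can be illustrated with a toy simulation. This is a loose sketch of the general principle with invented signals and parameters, not the published ISO-3 rule or the circuit model from the thesis:

```python
import numpy as np

def run_trials(n_trials, dopamine_gate):
    """Grow a stimulus->response weight via a derivative-based (differential
    Hebbian) correlation, but only where the third factor permits it."""
    w = 0.0     # weight from the early predictive stimulus to the response
    mu = 0.01   # learning rate
    for _ in range(n_trials):
        t = np.arange(100)
        u = np.exp(-((t - 30) ** 2) / 800.0)  # broad early predictive stimulus
        r = np.exp(-((t - 60) ** 2) / 50.0)   # later reward-related signal
        v = w * u + r                          # output: weighted input + reward
        dv = np.gradient(v)                    # derivative term of the rule
        gate = dopamine_gate(t)                # third factor gates plasticity
        w += mu * np.sum(u * dv * gate)
    return w

# Phasic burst around the reward vs. no third factor at all (both hypothetical).
phasic = lambda t: (np.abs(t - 60) < 15).astype(float)
no_gate = lambda t: np.zeros_like(t, dtype=float)

w_learned = run_trials(50, phasic)
w_blocked = run_trials(50, no_gate)
print(w_learned, w_blocked)
```

The comparison makes the role of the third factor concrete: with no gate the weight never changes, while a phasic burst timed to the reward lets the stimulus-response weight grow, mirroring how the model restricts potentiation to relevant moments.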
6

Diagnostic information use to understand brain mechanisms of facial expression categorization

Petro, Lucy S. January 2010 (has links)
Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and the subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1. In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing is reviewed, covering functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool for understanding the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms. 
Chapter 2 describes an eye-movement study revealing that inputs to the visual system via fixations reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces. How we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded the eye movements of observers who categorized the six basic categories of facial expressions. We use a measure of task-relevant information (diagnosticity) to discuss oculomotor behaviour, with a focus on two findings. Firstly, fixated regions reveal expression differences. Secondly, examining fixation sequences shows that the intersection of fixations with diagnostic information increases over a sequence of fixations. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations. A combination of psychophysical studies of visual recognition together with the EEG (electroencephalogram) signal is used to infer the dynamics of feature extraction and use during the recognition of facial expressions in Chapter 3. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes. In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas, suggesting top-down mechanisms, as V1 traditionally exhibits only simple response properties. 
Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas. Within the hierarchical visual system however, it is not known if an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (eyes and mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing is concerned with the interactive processing of bottom-up sensory-driven information and top-down mechanisms to relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain leading to proficient facial expression categorization.
7

An optimal control approach to testing theories of human information processing constraints

Chen, Xiuli January 2015 (has links)
This thesis is concerned with explaining human control and decision-making behaviours in an integrated framework. The framework provides a means of explaining human behaviour in terms of adaptation to the constraints imposed by both the task environment and the information processing mechanisms of the mind. Previous approaches have tended to be polarised between those that focused on rational analyses of the task environment, on the one hand, and those that focused on the mechanisms that give rise to cognition on the other. The former is usually based on the assumption that rational human beings adapt to the external environment by achieving 'goals' defined only by the task environment, with minimal consideration of the mechanisms of the human mind; the latter focuses on information processing mechanisms that are hypothesised to generate behaviour, e.g., heuristics or rules. In contrast, in the approach explored in this thesis, mechanism and rationality are tightly integrated. This thesis investigates a state estimation and optimal control approach, in which human behavioural strategies and heuristics, rather than being programmed into the model, emerge as a consequence of rational adaptation given a theory of the information processing constraints.
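The state-estimation-and-optimal-control idea can be made concrete with a scalar linear-quadratic-Gaussian-style loop, in which the controller acts on a Kalman-filtered estimate rather than on the true state. This is a generic textbook-style sketch under assumed parameters, not the model developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

a, b = 1.0, 1.0            # scalar dynamics: x' = a*x + b*u + process noise
q_proc, r_obs = 0.01, 0.5  # process and observation noise variances (assumed)
gain = 0.8                 # feedback gain applied to the *estimated* state

x, x_hat, p = 5.0, 0.0, 1.0   # true state, estimate, estimate variance
for _ in range(100):
    u = -gain * x_hat                          # control computed from the estimate
    x = a * x + b * u + rng.normal(0, np.sqrt(q_proc))
    y = x + rng.normal(0, np.sqrt(r_obs))      # noisy observation of the state
    # Kalman update: predict the estimate forward, then correct with y
    x_hat_pred = a * x_hat + b * u
    p_pred = a * p * a + q_proc
    k = p_pred / (p_pred + r_obs)              # Kalman gain
    x_hat = x_hat_pred + k * (y - x_hat_pred)
    p = (1 - k) * p_pred

print(abs(x), abs(x - x_hat))
```

Because the controller only ever sees the estimate, the quality of its behaviour is bounded by the observation noise: this is the sense in which strategies can be said to emerge from information processing constraints rather than being programmed in.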
8

Tracking the temporal dynamics of cultural perceptual diversity in visual information processing

Lao, Junpeng January 2014 (has links)
Human perception and cognition are not universal. Culture and experience markedly modulate visual information sampling in humans. Cross-cultural studies comparing Western Caucasians (WCs) and East Asians (EAs) have shown cultural differences in behaviour and neural activity with regard to perception and cognition. In particular, a number of studies suggest a local perceptual bias for Westerners (WCs) and a global bias for Easterners (EAs): WCs perceive most efficiently the salient information in the focal object, whereas EAs are biased toward the information in the background. This visual processing bias has been observed across a wide range of tasks and stimuli. However, the underlying neural mechanisms of these perceptual tunings, especially the temporal dynamics of different information coding, have yet to be clarified. Here, in the first two experiments I focus on the perceptual function of the diverse eye movement strategies of WCs and EAs. Human observers engage in different eye movement strategies to gather facial information: WCs preferentially fixate on the eyes and mouth, whereas EAs allocate their gaze relatively more to the center of the face. By employing a fixational eye movement paradigm in Study 1 and electroencephalographic (EEG) recording in Study 2, the results confirm the cultural differences in spatial-frequency information tuning and suggest different perceptual functions of the preferred eye movement pattern as a function of culture. The third study makes use of EEG adaptation and hierarchical visual stimuli to assess cultural tuning in global/local processing. Cultural diversity driven by selective attention is revealed in the early sensory stage. Together, the results show the temporal dynamics of cultural perceptual diversity. 
Cultural distinctions in the early time course are driven by selective attention to global information in EAs, whereas late effects are modulated by detail processing of local information in WC observers.
9

Depth of processing and semantic anomalies

Bohan, Jason Thomas January 2008 (has links)
The traditional view of language comprehension is that the meaning of a sentence is composed from the meaning of each word combined into a fully specified syntactic structure, and that these processes are generally completed fully and automatically. However, there is increasing evidence that these processes may, in some circumstances, not be completed fully, leaving the resultant representation underspecified. This is taken as evidence for shallow processing and is best typified, we argue, when readers fail to detect semantically anomalous words in a sentence. For example, when asked, “How many animals did Moses take on the Ark?” readers often incorrectly answer “two”, failing to notice that it was Noah, not Moses, who built the Ark. There has been surprisingly little work on the on-line processing of these types of anomalies, or on the differences in processing when anomalies are detected versus missed. This thesis presents a series of studies, including four eye-tracking studies and one ERP study, that investigate the nature of shallow processing as evidenced when participants report, or fail to report, hard-to-detect semantic anomalies. The main findings are that semantic anomaly detection is not immediate but slightly delayed, and that detection results in severe disruption in the eye movement data and a late positivity in ERPs. There was some evidence that non-detected anomalies were processed unconsciously in both the eye movement record and the ERPs; however, these effects were weak and require replication. The rate of anomaly detection is also shown to be modulated by processing load and experimental task instructions. The discussion considers what these results reveal about the nature of shallow processing.
10

Non-engagement in psychosis : a narrative analysis of service-users’ experiences of relationships with mental health services

Grinter, David John January 2012 (has links)
Introduction: Non-engagement with treatment is a familiar problem for health services and has been identified as a particularly important issue for those who experience psychosis. The therapeutic relationship between service-users and clinicians is considered crucial to good engagement. The extent to which requirements of engagement with treatments and mental health services represent a threat to the individual’s autonomy is a potential factor in non-engagement. Reactance theory has attempted to explain this phenomenon; however, relationships are complex and reactance theory does not reflect this. The exploration of narratives offers an opportunity to develop an understanding of the intricacies of these therapeutic relationships. Methods: Interviews were conducted with 11 participants who were recovering from an episode of psychosis, and a Narrative Analysis of the transcripts was undertaken. During this process, interpretation of the transcripts required the introduction of Dialogical Self Theory. Results: Three self-positions were identified through which participants narrated their experiences: Defiant, Subordinate and Reflective-Conciliatory. Discussion: Narratives surrounding recovery and engagement with services can appear complex, contradictory and fragmented, and they are narrated from different self-positions. This understanding of the complexity of narratives may help guide clinicians in maintaining a wider awareness of the multidimensional nature of individuals’ understandings of their experiences of recovery and relationships with services.
