About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Co-occurrence of Multisensory Facilitation and Competition in the Human Brain and its Impact on Aging

Diaconescu, Andreea 30 August 2011 (has links)
Perceptual objects often comprise a visual and an auditory signature, which arrive simultaneously through distinct sensory channels; these multisensory features are linked by virtue of being attributed to a specific object. The binding of familiar auditory and visual signatures can be referred to as semantic audiovisual (AV) integration because it involves higher-level representations of naturalistic multisensory objects. While integration of semantically related multisensory features is behaviourally advantageous, multisensory competition, in which one modality dominates at the expense of another, impairs performance. Both multisensory facilitation and competition effects on performance are exacerbated with age: older adults show a significantly larger performance gain from bimodal presentations than from unimodal ones. In the present thesis project, magnetoencephalography (MEG) recordings during semantically related bimodal and unimodal stimulation captured the spatiotemporal patterns underlying both multisensory facilitation and competition in young and older adults. We first demonstrate that multisensory processes unfold in multiple stages: first, posterior parietal neurons respond preferentially to bimodal stimuli; second, regions in superior temporal and posterior cingulate cortices detect the semantic category of the stimuli; and finally, at later processing stages, orbitofrontal regions process crossmodal conflicts when complex sounds and pictures are semantically incongruent. Older adults, in contrast to young adults, are more efficient at integrating semantically congruent multisensory information across auditory and visual channels. Moreover, in these multisensory facilitation conditions, increased neural activity in medial fronto-parietal brain regions predicts faster motor performance in response to bimodal stimuli in older compared to younger adults.
Finally, by examining the variability of the MEG signal, we showed that an increase in local entropy with age is behaviourally adaptive in the older group, as it significantly correlates with more stable and more accurate performance in older compared to young adults.
3

HEAR THIS, READ THAT; AUDIOVISUAL INTEGRATION EFFECTS ON RECOGNITION MEMORY

Wong, Nadia P. January 2015 (has links)
Our experience with the world depends on how we integrate sensory information. Multisensory integration generates contextually rich experiences, which are more distinct and more easily retrievable than their unisensory counterparts. Here, we report a series of experiments examining the impact semantic audiovisual (AV) congruency has on recognition memory. Participants were presented with AV word pairs which could either be the same or different (i.e., hear “ring”, see “phone”) followed by a recognition test. Recognition memory was found to be improved for words following incongruent presentations. Results suggest higher cognitive processes may be recruited to resolve sensory conflicts, leading to superior recognition for incongruent words. Integration may help in easing the processing of multisensory events, but does not promote the processing needed to make them distinctive. / Thesis / Master of Science (MSc)
4

Body Representations in Obesity

Tagini, Sofia 09 December 2019 (has links)
Body representation disorders play a key role in the characterization of obesity. So far, the literature has consistently pointed to a negative attitudinal body image, whereas a review of the pertinent literature shows that results for self-perceived body size are far less consistent. Chapter 2 tries to clarify this issue by adopting a more innovative theoretical framework (i.e., the implicit/explicit model; Longo, 2015). For the first time, we probed the implicit representation underlying position sense in obesity, finding a representation similar to that of healthy-weight participants. Importantly, this result shows that not all components of body representation are affected by obesity. Chapter 3 addresses another aspect of body representation that has been neglected in obesity, namely bodily self-consciousness. The Rubber Hand Illusion (RHI) has traditionally been used to investigate the mechanisms underlying body awareness. Our results show that individuals with obesity have a comparable subjective experience of the illusion, while the effect of the illusion on self-location is reduced. This dissociation can be interpreted as the result of preserved visuo-tactile integration and altered visuo-proprioceptive integration in obesity. However, in Chapter 4 we report that individuals with obesity have a reduced temporal resolution of visuo-tactile integration, meaning that they integrate stimuli over a wider range of asynchronies than healthy-weight participants. This evidence predicts that in the RHI individuals with obesity might perceive the asynchronous stimulation as more synchronous, showing a greater effect of the illusion in this condition as well. Nevertheless, we failed to show this pattern of results in our study with an asynchronous-stimulation interval of 1000 ms (the interval usually adopted in the RHI paradigm).
We hypothesized that smaller time lags, falling inside the temporal binding window of individuals with obesity but outside the temporal binding window of healthy-weight participants, might not be perceived by individuals with obesity yet be detected by healthy-weight individuals; accordingly, a dissimilar susceptibility to the illusion should be observed. Chapter 5 investigates this issue by adopting a modified version of the RHI that enables parametric modulation of the timing of the stimulation. However, we could not replicate the RHI even in healthy-weight participants; the possible methodological reasons for this failure are discussed. Overall, this work tries to fill some gaps in the previous literature on body representation in obesity, and our findings provide an important clue about the possible cognitive mechanisms involved in body representation disorders in obesity. However, many questions still need an answer: given the complexity of the domain, comprehensive knowledge of the topic remains challenging. A deep understanding of obesity is fundamental to developing multidisciplinary and efficacious rehabilitative protocols. Indeed, better treatments would significantly improve individuals’ well-being and also contribute to reducing the huge health costs related to obesity comorbidities.
5

Multisensory integration of social information in adult aging

Hunter, Edyta Monika January 2011 (has links)
Efficient navigation of our social world depends on the generation, interpretation and combination of social signals within different sensory systems. However, the influence of adult aging on cross-modal integration of emotional stimuli remains poorly understood. Therefore, the aim of this PhD thesis is to understand the integration of visual and auditory cues in social situations and how this is associated with other factors important for successful social interaction such as recognising emotions or understanding the mental states of others. A series of eight experiments were designed to compare the performance of younger and older adults on tasks related to multisensory integration and social cognition. Results suggest that older adults are significantly less accurate at correctly identifying emotions from one modality (faces or voices alone) but perform as well as younger adults on tasks where congruent auditory and visual emotional information are presented concurrently. Therefore, older adults appear to benefit from congruent multisensory information. In contrast, older adults are poorer than younger adults at detecting incongruency from different sensory modalities involved in decoding cues to deception, sarcasm or masking of emotions. It was also found that age differences in the processing of relevant and irrelevant visual and auditory social information might be related to changes in gaze behaviour. A further study demonstrated that the changes in behaviour and social interaction often reported in patients post-stroke might relate to problems in integrating the cross-modal social information. The pattern of findings is discussed in relation to social, emotional, neuropsychological and cognitive theories.
6

Closed-loop prosthetic hand : understanding sensorimotor and multisensory integration under uncertainty

Saunders, Ian January 2012 (has links)
To make sense of our unpredictable world, humans use sensory information streaming through billions of peripheral neurons. Uncertainty and ambiguity plague each sensory stream, yet remarkably our perception of the world is seamless, robust and often optimal in the sense of minimising perceptual variability. Moreover, humans have a remarkable capacity for dexterous manipulation. Initiation of precise motor actions under uncertainty requires awareness not only of the statistics of our environment but also of the reliability of our sensory and motor apparatus. What happens when our sensory and motor systems are disrupted? Upper-limb amputees fitted with state-of-the-art prostheses must learn both to control and to make sense of their robotic replacement limb. Tactile feedback is not a standard feature of these open-loop limbs, fundamentally limiting the degree of rehabilitation. This thesis introduces a modular closed-loop upper-limb prosthesis, a modified Touch Bionics ilimb hand with a custom-built linear vibrotactile feedback array. To understand the utility of the feedback system in the presence of multisensory and sensorimotor influences, three fundamental open questions were addressed: (i) What are the mechanisms by which subjects compute sensory uncertainty? (ii) Do subjects integrate an artificial modality with visual feedback as a function of sensory uncertainty? (iii) What are the influences of open-loop and closed-loop uncertainty on prosthesis control? To optimally handle uncertainty in the environment, people must acquire estimates of the mean and uncertainty of sensory cues over time. A novel visual tracking experiment was developed to explore the processes by which people acquire these statistical estimators. Subjects were required to simultaneously report their evolving estimate of the mean and uncertainty of visual stimuli over time.
This revealed that subjects could accumulate noisy evidence over the course of a trial to form an optimal continuous estimate of the mean, hindered only by natural kinematic constraints. Although subjects had explicit access to a measure of their continuous objective uncertainty, acquired from sensory information available within a trial, this was limited by a conservative margin for error. In the Bayesian framework, sensory evidence (from multiple sensory cues) and prior beliefs (knowledge of the statistics of sensory cues) are combined to form a posterior estimate of the state of the world. Multiple studies have revealed that humans behave as optimal Bayesian observers when making binary decisions in forced-choice tasks. In this thesis these results were extended to a continuous spatial localisation task. Subjects could rapidly accumulate evidence presented via vibrotactile feedback (an artificial modality), and integrate it with visual feedback. The weight attributed to each sensory modality was chosen so as to minimise the overall objective uncertainty. Since subjects were able to combine multiple sources of sensory information with respect to their sensory uncertainties, it was hypothesised that vibrotactile feedback would benefit prosthesis wearers in the presence of either sensory or motor uncertainty. The closed-loop prosthesis served as a novel manipulandum to examine the role of feed-forward and feed-back mechanisms for prosthesis control, known to be required for successful object manipulation in healthy humans. Subjects formed economical grasps in idealised (noise-free) conditions, and this was maintained even when visual, tactile, or both sources of feedback were removed. However, when uncertainty was introduced into the hand controller, performance degraded significantly in the absence of visual or tactile feedback.
These results reveal the complementary nature of feed-forward and feed-back processes in simulated prosthesis wearers, and highlight the importance of tactile feedback for control of a prosthesis.
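The reliability-weighted cue combination this abstract describes has a standard closed form: each cue is weighted by its inverse variance, which minimises the variance of the fused estimate. A minimal sketch follows; the function name and the numerical values are illustrative assumptions, not data or code from the thesis.

```python
def combine_cues(mu_v, var_v, mu_t, var_t):
    """Fuse a visual and a vibrotactile estimate by inverse-variance weighting."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_t)  # reliability weight for vision
    w_t = 1.0 - w_v                                    # reliability weight for touch
    mu_post = w_v * mu_v + w_t * mu_t                  # posterior mean
    var_post = 1.0 / (1.0 / var_v + 1.0 / var_t)       # posterior variance, below either cue alone
    return mu_post, var_post

# Vision four times more reliable than touch: the fused estimate lands
# near the visual cue, with lower variance than either cue alone.
mu, var = combine_cues(mu_v=0.0, var_v=1.0, mu_t=2.0, var_t=4.0)
# mu == 0.4, var == 0.8
```

The key property, matching the thesis's finding, is that the fused variance is always smaller than that of the most reliable single cue, so integrating even a noisy artificial modality can reduce overall uncertainty.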
7

Audio-visual interactions in manual and saccadic responses

Makovac, Elena January 2013 (has links)
Chapter 1 introduces the notions of multisensory integration (the binding of information coming from different modalities into a unitary percept) and multisensory response enhancement (the improvement of the response to multisensory stimuli, relative to the response to the most efficient unisensory stimulus), as well as the general goal of the present thesis, which is to investigate different aspects of the multisensory integration of auditory and visual stimuli in manual and saccadic responses. The subsequent chapters report experimental evidence of different factors affecting the multisensory response: spatial discrepancy, stimulus salience, congruency between cross-modal attributes, and the inhibitory influence of concurrent distractors. Chapter 2 reports three experiments on the role of the superior colliculus (SC) in multisensory integration. To achieve this, the absence of S-cone input to the SC was exploited, following the method introduced by Sumner, Adamjee, and Mollon (2002). I found evidence that the spatial rule of multisensory integration (Meredith & Stein, 1983) applies only to SC-effective (luminance-channel) stimuli, and does not apply to SC-ineffective (S-cone) stimuli. The same results were obtained with an alternative method for the creation of S-cone stimuli: the tritanopic technique (Cavanagh, MacLeod, & Anstis, 1987; Stiles, 1959; Wald, 1966). In both cases significant multisensory response enhancements were obtained using a focused attention paradigm, in which participants had to focus their attention on the visual modality and to inhibit responses to auditory stimuli. Chapter 3 reports two experiments showing the influence of shape congruency between auditory and visual stimuli on multisensory integration, i.e. the correspondence between structural aspects of visual and auditory stimuli (e.g., spiky shapes and “spiky” sounds).
Detection of audio-visual events was faster for congruent than incongruent pairs, and this congruency effect also occurred in a focused attention task, where participants were required to respond only to visual targets and could ignore irrelevant auditory stimuli. This particular type of cross-modal congruency was evaluated in relation to the inverse effectiveness rule of multisensory integration (Meredith & Stein, 1983). In Chapter 4, the locus of the cross-modal shape congruency effect was evaluated by applying the race model analysis (Miller, 1982). The results showed that the violation of the model is stronger for some congruent pairings than for incongruent pairings. Evidence of multisensory depression was found for some pairs of incongruent stimuli. These data imply a perceptual locus for the cross-modal shape congruency effect. Moreover, it is evident that multisensoriality does not always induce an enhancement; in some cases, when the attributes of the stimuli are particularly incompatible, a unisensory response may be more effective than the multisensory one. Chapter 5 reports experiments centred on saccadic generation mechanisms. Specifically, the multisensoriality of the saccadic inhibition (SI; Reingold & Stampe, 2002) phenomenon is investigated. Saccadic inhibition refers to a characteristic inhibitory dip in saccadic frequency beginning 60-70 ms after onset of a distractor. The very short latency of SI suggests that the distractor interferes directly with subcortical target selection processes in the SC. The impact of multisensory stimulation on SI was studied in four experiments. In Experiments 7 and 8, a visual target was presented with a concurrent audio, visual or audio-visual distractor. Multisensory audio-visual distractors induced stronger SI than did unisensory distractors, but there was no evidence of multisensory integration (as assessed by a race model analysis).
In Experiments 9 and 10, visual, auditory or audio-visual targets were accompanied by a visual distractor. When there was no distractor, multisensory integration was observed for multisensory targets. However, this multisensory integration effect disappeared in the presence of a visual distractor. As a general conclusion, the results from Chapter 5 indicate that multisensory integration occurs for target stimuli, but not for distracting stimuli, and that the process of audio-visual integration is itself sensitive to disruption by distractors.
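The race model analysis (Miller, 1982) cited in this abstract tests whether redundant-target reaction times are too fast for separate unisensory "races": the model is violated at latency t when the cumulative distribution of bimodal RTs exceeds the sum of the two unisensory CDFs. A minimal empirical-CDF sketch, with synthetic RT values purely for illustration (not data from the thesis):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Return a boolean array: True where F_AV(t) > min(F_A(t) + F_V(t), 1),
    i.e. where Miller's race-model bound is violated."""
    def ecdf(rts, t):
        # empirical CDF of the RT sample evaluated at each point of t
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return f_av > bound

# Synthetic example: bimodal RTs faster than either unisensory sample,
# so the bound is violated at short latencies but not at long ones.
viol = race_model_violation(rt_av=[200, 210], rt_a=[300, 400],
                            rt_v=[320, 420], t_grid=np.array([250.0, 500.0]))
# viol == [True, False]
```

In practice the CDFs are compared at matched quantiles across participants; this sketch only illustrates the inequality itself, which is how "no evidence of multisensory integration" can be concluded even when bimodal responses are numerically faster.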
8

The Role of Visuomotor Regulation Processes on Perceived Audiovisual Events

Manson, Gerome 05 December 2013 (has links)
Recent evidence suggests audiovisual perception changes as one engages in action. Specifically, if an audiovisual illusion consisting of 2 flashes and 1 beep is presented during the high-velocity portion of upper-limb movements, the influence of the auditory stimuli is subdued. The goal of this thesis was to examine whether visuomotor regulation processes that rely on information obtained while the limb is traveling at high velocity could explain this perceptual modulation. In the present study, to control for engagement in visuomotor regulation processes, vision of the environment was manipulated. In conditions without vision of the environment, participants did not show the noted modulation of the audiovisual illusion. Also, analysis of the movement trajectories and endpoint precision revealed that movements without vision were less controlled than movements performed with vision. These results suggest that engagement in visuomotor regulation processes can influence perception of certain audiovisual events during goal-directed action.
10

Multisensory Integration in Social and Nonsocial Events and Emerging Language in Toddlers

Bruce, Madeleine D. 12 1900 (has links)
Multisensory integration enables young children to combine information across their senses to create rich, coordinated perceptual experiences. Events with high intersensory redundancy across the senses provide salient experiences which aid in the integration process and facilitate perceptual learning. Thus, this study’s first objective was to evaluate whether toddlers’ multisensory integration abilities generalize across social and nonsocial conditions, and whether multisensory integration abilities predict 24-month-olds’ language development. Additionally, previous research has not examined contextual factors, such as socioeconomic status (SES) or parenting behaviors, that may influence the development of multisensory integration skills. As such, this study’s second aim was to evaluate whether maternal sensitivity and SES moderate the proposed relationship between multisensory integration and language outcomes. Results indicated that toddlers’ multisensory integration abilities, F(1,33) = 4.191, p = .049, but not their general attention control skills, differed as a function of condition (social or nonsocial), and that social multisensory integration significantly predicted toddlers’ expressive vocabularies at 24 months old, β = .530, p = .007. However, no evidence was found to suggest that SES or maternal sensitivity moderated the detected relationship between multisensory integration abilities and language outcomes; rather, mothers’ sensitivity scores directly predicted toddlers’ expressive language outcomes, β = .320, p = .044, in addition to their social multisensory integration skills. These findings suggest that at 24 months of age, both sensitive maternal behaviors and the ability to integrate social multisensory information are important to the development of early expressive language outcomes. / M. S. / Multisensory integration allows children to make sense of information received across their senses.
Previous research has shown that events containing simultaneous and overlapping sensory information aid children in learning about objects. However, research has yet to evaluate whether children’s multisensory integration abilities are related to language learning. Thus, this study’s first goal was to examine whether toddlers are equally skilled at integrating multisensory information in social and nonsocial contexts, and whether multisensory integration skills are related to toddlers’ language skills. This study’s second goal was to examine whether parenting behaviors and/or familial access to resources (i.e., socioeconomic status) play a role in the hypothesized relationship between multisensory integration and language in toddlerhood. Results indicated that toddlers show better multisensory integration abilities when viewing social as opposed to nonsocial sensory information, and that social multisensory integration skills were significantly related to their language skills. Also, maternal parenting behaviors, but not socioeconomic status, were significantly related to toddlers’ language abilities. These findings suggest that at 24 months of age, both sensitive maternal parenting and the ability to integrate social multisensory information are important to the development of language in toddlerhood.
