1

Cognitive influences on the crossed-hands deficit: An investigation of the dynamic nature of tactile processing

Lorentz, Lisa January 2021
Theories of tactile localization ability are based largely on the study of crossing effects, in which crossing the hands leads to a significant impairment in performance. This work has resulted in a rich literature that establishes tactile localization as inherently multisensory in nature. However, new work suggests that the studies used to date have made incorrect assumptions about the processes underlying performance (Maij et al., 2020) and the perceptual information that is considered (Badde et al., 2019). This thesis proposes the addition of a new parameter to existing theory that allows these new results to be incorporated into the existing literature: specifically, the influence of cognitive factors on performance. The Introduction provides an overview of the current state of the literature, as well as the novel findings that seem to contradict it. I then propose a framework that highlights the malleability of tactile localization. The empirical work focuses on previously unexplored cognitive influences on tactile localization performance. In Chapter 2 I demonstrate that visual imagery influences performance, and importantly, that individual differences in visual imagery ability influence imagery’s effect on performance. In Chapter 3 I demonstrate that an individual’s attentional set influences performance, and that results previously thought to be due to changes in perceptual signal are likely due to changes in attentional focus. In Chapter 4 I highlight the biases in theory and measurement practice that have limited our understanding of tactile localization more broadly. The General Discussion then describes in detail how to incorporate the findings of this thesis into the existing literature, which requires a paradigm shift in how we view tactile localization. / Dissertation / Doctor of Philosophy (PhD) / Our ability to localize tactile stimuli is critical for successful interaction with our environment: if we feel something crawling on us, we need to eliminate this unwanted visitor as quickly and accurately as possible. A large body of evidence suggests that tactile localization requires perceptual signals beyond the somatotopic information about where on your skin you feel the tactile stimulus. Just think about how much easier it is to swat at a bug on your arm when you can see it as well as feel it. In this thesis I provide novel empirical evidence that cognitive factors also influence our ability to engage in tactile localization, including visual imagery and attention. I then propose an update to existing theory that can account for the influence of these cognitive factors, alongside the traditional approach to the integration of perceptual signals such as vision.
2

Meaning and emplacement in expressive immersive virtual environments

Morie, Jacquelyn Ford January 2007
From my beginnings as an artist, I have always created my work with the goal of evoking strong emotional responses from those who experience it. I wanted to wrap my work around the viewers and have it encompass them completely. When virtual reality came along, I knew I had found my true medium. I could design the space, bring people inside and see what they did there. I was always excited to see what the work would mean to them, what they brought to it, what I added, and what they took away.
3

The Co-occurrence of Multisensory Facilitation and Competition in the Human Brain and its Impact on Aging

Diaconescu, Andreea 30 August 2011
Perceptual objects often comprise a visual and an auditory signature, which arrive simultaneously through distinct sensory channels, and multisensory features are linked by virtue of being attributed to a specific object. The binding of familiar auditory and visual signatures can be referred to as semantic audiovisual (AV) integration because it involves higher level representations of naturalistic multisensory objects. While integration of semantically related multisensory features is behaviorally advantageous, multisensory competition, or situations of sensory dominance of one modality at the expense of another, impairs performance. Multisensory facilitation and competition effects on performance are exacerbated with age. Older adults show a significantly larger performance gain from bimodal presentations compared to unimodal ones. In the present thesis project, magnetoencephalography (MEG) recordings of semantically related bimodal and unimodal stimuli captured the spatiotemporal patterns underlying both multisensory facilitation and competition in young and older adults. We first demonstrate that multisensory processes unfold in multiple stages: first, posterior parietal neurons respond preferentially to bimodal stimuli; second, regions in superior temporal and posterior cingulate cortices detect the semantic category of the stimuli; and finally, at later processing stages, orbitofrontal regions process crossmodal conflicts when complex sounds and pictures are semantically incongruent. Older adults, in contrast to young adults, are more efficient at integrating semantically congruent multisensory information across auditory and visual channels. Moreover, in these multisensory facilitation conditions, increased neural activity in medial fronto-parietal brain regions predicts faster motor performance in response to bimodal stimuli in older compared to younger adults. Finally, by examining the variability of the MEG signal, we showed that an increase in local entropy with age is behaviourally adaptive in the older group, as it significantly correlates with more stable and more accurate performance in older compared to young adults.
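One common way to quantify this kind of moment-to-moment signal variability is sample entropy; the sketch below is purely illustrative and does not reproduce the entropy measure, parameters, or analysis pipeline actually used in the thesis above (which, like much of the MEG aging literature, may have used a multiscale variant).

```python
# Illustrative only: sample entropy is one common measure of the local
# variability of a neural time series. The measure, parameters, and data
# here are assumptions for the sketch, not the thesis's actual analysis.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Negative log of the conditional probability that template vectors
    matching for m samples (within tolerance r) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every other template, excluding the self-match
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(dist <= r)) - 1
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A regular signal (sine wave) yields lower entropy than irregular white noise.
t = np.linspace(0, 10, 500)
print(sample_entropy(np.sin(2 * np.pi * t)))                           # low
print(sample_entropy(np.random.default_rng(0).standard_normal(500)))   # higher
```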
4

EXPLORING REFERENCE FRAME INTEGRATION USING THE CROSSED-HANDS DEFICIT

Unwalla, Kaian January 2021
You can only perceive the location of a touch when you know where your hands are in space. Locating a touch to the body requires the integration of internal (somatotopic) and external (spatial) reference frames. In order to explore the relative contribution of internal versus external information, this thesis employed a crossed-hands tactile temporal order judgment (TOJ) task. This task requires participants to indicate which of two vibrations, one to each hand, occurred first. The magnitude of the deficit observed when the hands are crossed over the midline provides an index of how internal and external reference frames are integrated. This thesis first showed that the crossed-hands tactile TOJ task is a reliable measure, supporting its use as a measure of reference frame integration. Next, this thesis applied a probabilistic model to theoretically estimate the weights placed on the internal and external reference frames. We showed that a bias towards external information results in a larger external weight, and vice versa for internal information. Finally, using the model, we showed that the crossed-hands deficit is reduced while lying down, supporting an influence of vestibular information on the external reference frame. Taken together, this thesis highlights that we are able to flexibly adapt the weighting of different spatial representations of touch. / Thesis / Doctor of Science (PhD) / Determining the boundary of our body requires that we localize touches to our body. When the body moves and interacts with the world, this determination becomes more difficult. Integrating information from other senses can support the localization of touch, and thus knowledge of our body. For example, to locate a touch to your right hand, you must feel the touch on your right hand, but also determine where your right hand is located in space. This thesis shows that the contribution of each sense to locating a touch is consistent within an individual and remains stable over time. Interestingly, based on the availability of each sense, we flexibly adapt these contributions to ensure that our ability to locate the touch remains unchanged. What we define as our body is constructed based on the information available in the present moment.
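As a minimal illustration of the kind of weighting scheme the abstract refers to (the specific probabilistic model, priors, and parameter values used in the thesis are not reproduced here), a reliability-weighted combination of the two reference frames can be written as

$$
\hat{x} = w_{\text{int}}\, x_{\text{int}} + w_{\text{ext}}\, x_{\text{ext}},
\qquad
w_{\text{ext}} = \frac{1/\sigma_{\text{ext}}^{2}}{1/\sigma_{\text{int}}^{2} + 1/\sigma_{\text{ext}}^{2}},
\qquad
w_{\text{int}} = 1 - w_{\text{ext}},
$$

where $x_{\text{int}}$ and $x_{\text{ext}}$ are the touch locations coded in the internal (somatotopic) and external (spatial) reference frames and each $\sigma^{2}$ is that code's variance. With uncrossed hands the two codes agree; with crossed hands they conflict, so under a scheme of this kind a larger $w_{\text{ext}}$ (for example, from a bias towards external information) predicts a larger crossed-hands deficit, consistent with the pattern described above.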
5

HEAR THIS, READ THAT; AUDIOVISUAL INTEGRATION EFFECTS ON RECOGNITION MEMORY

Wong, Nadia P. January 2015
Our experience with the world depends on how we integrate sensory information. Multisensory integration generates contextually rich experiences, which are more distinct and more easily retrievable than their unisensory counterparts. Here, we report a series of experiments examining the impact of semantic audiovisual (AV) congruency on recognition memory. Participants were presented with AV word pairs that could either be the same or different (e.g., hear “ring”, see “phone”), followed by a recognition test. Recognition memory was found to be improved for words following incongruent presentations. Results suggest that higher cognitive processes may be recruited to resolve sensory conflicts, leading to superior recognition for incongruent words. Integration may ease the processing of multisensory events, but does not promote the processing needed to make them distinctive. / Thesis / Master of Science (MSc)
6

IS BAYESIAN UPDATING MODALITY-DEPENDENT?

Fait, Stefano 13 May 2022
From a Bayesian perspective, the probabilistic dependencies between the hypotheses under consideration and diagnostic pieces of evidence are the only relevant information for probabilistic updating. We investigated whether human probability judgments conform to this assumption by manipulating the sensory systems involved in the acquisition and processing of information concerning evidence and hypotheses. Hence, we ran five (computer-based) experiments using a variant of the classic book bag and poker chip task (e.g., Phillips & Edwards, 1966). Participants were first presented with pairs of urns A and B filled with different proportions of balls that turned either red or green in the visual condition, balls that emitted either a low- or high-pitched sound in the auditory condition, and balls that both turned a color and emitted a sound in various cross-modal (i.e., audio-visual) conditions. One urn was then selected at random, some balls were randomly drawn from it, and their color and/or sound were disclosed. Participants’ task was to estimate the probability that each of the two urns had been selected, given the information provided. In Experiments 1 and 2, we compared probability judgments for probabilistically identical visual and auditory scenarios that differed only with regard to the sensory system involved, without finding any difference between the answers provided in the two conditions. In Experiments 3, 4, and 5, the addition of cross-modal scenarios allowed us to investigate the effects of synergic (i.e., both visual and auditory evidence individually supported the hypothesis they jointly supported) or contrasting (i.e., either visual and/or auditory evidence individually supported the hypothesis opposite the one they jointly supported) audio-visual evidence on probabilistic updating. Our results provide evidence in favor of a synergy-contrasting effect, as probability judgments were more accurate in synergic conditions than in contrasting conditions. This suggests that, when perceptual information is acquired through a single sensory system, probability judgments conform to the Bayesian assumption that the sensory system involved does not play a role in the updating process, whereas the simultaneous presentation of cross-modal information can influence participants’ performance.
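For readers unfamiliar with the task, the normative Bayesian benchmark against which such judgments are compared can be computed directly from Bayes' rule. The sketch below is illustrative only; the urn proportions and draws are hypothetical rather than taken from these experiments. Note that the posterior depends only on the likelihood of the observed draws, not on the sensory modality through which they were presented, which is exactly the assumption the experiments test.

```python
# Illustrative normative benchmark for the book-bag-and-poker-chip task;
# the urn proportions and draw counts below are hypothetical, not taken
# from the experiments described above.

def posterior_urn_a(p_red_a, p_red_b, n_red, n_green, prior_a=0.5):
    """Posterior probability that urn A was selected, given the draws.

    p_red_a, p_red_b -- proportion of 'red' (or low-pitched) balls in urns A and B
    n_red, n_green   -- number of red and green balls drawn (assumed with replacement)
    prior_a          -- prior probability that urn A was selected
    """
    like_a = (p_red_a ** n_red) * ((1 - p_red_a) ** n_green)
    like_b = (p_red_b ** n_red) * ((1 - p_red_b) ** n_green)
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

# Example: urn A holds 70% red balls, urn B holds 30%; 4 red and 1 green are drawn.
print(round(posterior_urn_a(0.7, 0.3, 4, 1), 3))  # 0.927
```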
7

Body Representations in Obesity

Tagini, Sofia 09 December 2019
Body representation disorders play a key role in the characterization of obesity. So far, the literature has consistently pointed to a negative attitudinal body image. Conversely, a review of the pertinent literature shows that more inconsistent results have been reported for self-perceived body size. Chapter 2 tries to clarify this issue by adopting an innovative theoretical framework (i.e., the implicit/explicit model; Longo, 2015). For the first time, we probed the implicit representation underlying position sense in obesity, reporting a representation similar to that of healthy weight participants. Importantly, this result shows that not all components of body representation are affected by obesity. Chapter 3 addresses another aspect of body representation that has been neglected in obesity, namely bodily self-consciousness. The Rubber Hand Illusion (RHI) has traditionally been used to investigate the mechanisms underlying body awareness. Our results show that individuals with obesity have a comparable subjective experience of the illusion, while the effect of the illusion on self-location is reduced. This dissociation can be interpreted as the result of preserved visuo-tactile integration and altered visuo-proprioceptive integration in obesity. However, in Chapter 4 we reported that individuals with obesity have a reduced temporal resolution of visuo-tactile integration, meaning that they integrated stimuli over a wider range of asynchronies than healthy weight participants. This evidence predicts that, in the RHI, individuals with obesity might perceive the asynchronous stimulation as more synchronous, showing a greater effect of the illusion in this condition as well. Nevertheless, we failed to show this pattern of results in our study, which used an interval of asynchronous stimulation of 1000 ms (as usually adopted in the RHI paradigm). We hypothesized that smaller time lags, which fall inside the temporal binding window of individuals with obesity but outside the temporal binding window of healthy weight participants, might not be perceived by individuals with obesity but would be detected by healthy weight individuals. Accordingly, a dissimilar susceptibility to the illusion should be observed. Chapter 5 investigates this issue by adopting a modified version of the RHI that enables parametric modulation of the timing of the stimulation. However, we could not replicate the RHI even in healthy weight participants. The possible methodological reasons for this failure are discussed. Overall, this work tries to fill some gaps in the previous literature about body representation in obesity. Moreover, our findings provide an important clue about the possible cognitive mechanisms involved in body representation disorders in obesity. However, many questions still need an answer: given the complexity of the domain, a comprehensive understanding of the topic remains challenging. A deep understanding of obesity is fundamental to developing multidisciplinary and effective rehabilitative protocols. Indeed, better treatments would not only significantly improve individuals’ well-being but also help reduce the substantial health costs related to obesity comorbidities.
8

Multisensory integration of social information in adult aging

Hunter, Edyta Monika January 2011
Efficient navigation of our social world depends on the generation, interpretation and combination of social signals within different sensory systems. However, the influence of adult aging on cross-modal integration of emotional stimuli remains poorly understood. Therefore, the aim of this PhD thesis is to understand the integration of visual and auditory cues in social situations and how this is associated with other factors important for successful social interaction, such as recognising emotions or understanding the mental states of others. A series of eight experiments was designed to compare the performance of younger and older adults on tasks related to multisensory integration and social cognition. Results suggest that older adults are significantly less accurate at correctly identifying emotions from one modality (faces or voices alone) but perform as well as younger adults on tasks where congruent auditory and visual emotional information are presented concurrently. Therefore, older adults appear to benefit from congruent multisensory information. In contrast, older adults are poorer than younger adults at detecting incongruency from different sensory modalities involved in decoding cues to deception, sarcasm or masking of emotions. It was also found that age differences in the processing of relevant and irrelevant visual and auditory social information might be related to changes in gaze behaviour. A further study demonstrated that the changes in behaviour and social interaction often reported in patients post-stroke might relate to problems in integrating cross-modal social information. The pattern of findings is discussed in relation to social, emotional, neuropsychological and cognitive theories.
9

Closed-loop prosthetic hand : understanding sensorimotor and multisensory integration under uncertainty

Saunders, Ian January 2012
To make sense of our unpredictable world, humans use sensory information streaming through billions of peripheral neurons. Uncertainty and ambiguity plague each sensory stream, yet remarkably our perception of the world is seamless, robust and often optimal in the sense of minimising perceptual variability. Moreover, humans have a remarkable capacity for dexterous manipulation. Initiation of precise motor actions under uncertainty requires awareness of not only the statistics of our environment but also the reliability of our sensory and motor apparatus. What happens when our sensory and motor systems are disrupted? Upper-limb amputees fitted with state-of-the-art prostheses must learn to both control and make sense of their robotic replacement limb. Tactile feedback is not a standard feature of these open-loop limbs, fundamentally limiting the degree of rehabilitation. This thesis introduces a modular closed-loop upper-limb prosthesis, a modified Touch Bionics ilimb hand with a custom-built linear vibrotactile feedback array. To understand the utility of the feedback system in the presence of multisensory and sensorimotor influences, three fundamental open questions were addressed: (i) What are the mechanisms by which subjects compute sensory uncertainty? (ii) Do subjects integrate an artificial modality with visual feedback as a function of sensory uncertainty? (iii) What are the influences of open-loop and closed-loop uncertainty on prosthesis control? To optimally handle uncertainty in the environment, people must acquire estimates of the mean and uncertainty of sensory cues over time. A novel visual tracking experiment was developed in order to explore the processes by which people acquire these statistical estimators. Subjects were required to simultaneously report their evolving estimate of the mean and uncertainty of visual stimuli over time. This revealed that subjects could accumulate noisy evidence over the course of a trial to form an optimal continuous estimate of the mean, hindered only by natural kinematic constraints. Although subjects had explicit access to a measure of their continuous objective uncertainty, acquired from sensory information available within a trial, this was limited by a conservative margin for error. In the Bayesian framework, sensory evidence (from multiple sensory cues) and prior beliefs (knowledge of the statistics of sensory cues) are combined to form a posterior estimate of the state of the world. Multiple studies have revealed that humans behave as optimal Bayesian observers when making binary decisions in forced-choice tasks. In this thesis these results were extended to a continuous spatial localisation task. Subjects could rapidly accumulate evidence presented via vibrotactile feedback (an artificial modality), and integrate it with visual feedback. The weight attributed to each sensory modality was chosen so as to minimise the overall objective uncertainty. Since subjects were able to combine multiple sources of sensory information with respect to their sensory uncertainties, it was hypothesised that vibrotactile feedback would benefit prosthesis wearers in the presence of either sensory or motor uncertainty. The closed-loop prosthesis served as a novel manipulandum to examine the role of feed-forward and feed-back mechanisms for prosthesis control, known to be required for successful object manipulation in healthy humans.
Subjects formed economical grasps in idealised (noise-free) conditions, and this was maintained even when visual, tactile, or both sources of feedback were removed. However, when uncertainty was introduced into the hand controller, performance degraded significantly in the absence of visual or tactile feedback. These results reveal the complementary nature of feed-forward and feed-back processes in simulated prosthesis wearers, and highlight the importance of tactile feedback for control of a prosthesis.
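The cue-weighting principle described in this abstract, with weights chosen so as to minimise overall uncertainty, corresponds to standard minimum-variance (reliability-weighted) cue combination. The sketch below is a generic illustration with made-up noise levels, not the thesis's model or data; its point is that the combined estimate is at least as reliable as the better single cue, which is why adding vibrotactile feedback can reduce uncertainty even when it is noisier than vision.

```python
# Generic minimum-variance cue-combination sketch; the positions and noise
# levels are made up for illustration, not taken from the prosthesis work above.

def combine_cues(x_vis, var_vis, x_tac, var_tac):
    """Combine visual and vibrotactile position estimates, weighting each cue
    in proportion to its reliability (inverse variance)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_tac)
    w_tac = 1 - w_vis
    x_hat = w_vis * x_vis + w_tac * x_tac
    var_hat = (var_vis * var_tac) / (var_vis + var_tac)  # <= min(var_vis, var_tac)
    return x_hat, var_hat

# Example: a noisy visual estimate plus an even noisier vibrotactile estimate
# still yields a combined estimate more reliable than vision alone.
x_hat, var_hat = combine_cues(x_vis=10.0, var_vis=4.0, x_tac=12.0, var_tac=9.0)
print(round(x_hat, 2), round(var_hat, 2))  # 10.62 2.77
```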
