  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

The Virtual Self : Sensory-Motor Plasticity of Virtual Body-Ownership

Fasthén, Patrick January 2014 (has links)
The distinction between the sense of body-ownership and the sense of agency has attracted considerable empirical and theoretical interest lately. However, the respective contributions of multisensory and sensorimotor integration to these two varieties of body experience are still the subject of ongoing research. In this study, I examine the various methodological problems encountered in the empirical study of body-ownership and agency, using novel immersive virtual environment technology to investigate the interplay between sensory and motor information. More specifically, the focus is on testing the relative contributions and possible interactions of visual-tactile and visual-motor contingencies implemented under the same experimental protocol. The reported effects are corroborated by physiological measurements of skin conductance responses and heart rate. The findings outline a relatively simple method for identifying the necessary and sufficient conditions for the experience of body-ownership and agency, as studied with immersive virtual environment technology.
32

BRAIN-INSPIRED MACHINE LEARNING CLASSIFICATION MODELS

Amerineni, Rajesh 01 May 2020 (has links)
This dissertation focuses on the development of three classes of brain-inspired machine learning classification models. The models attempt to emulate (a) multi-sensory integration, (b) context integration, and (c) visual information processing in the brain. The multi-sensory integration models are aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli. Two multimodal classification models are introduced: the feature-integrating (FI) model and the decision-integrating (DI) model. The FI model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The DI model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performances of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the “inverse effectiveness principle” by yielding significantly higher classification accuracies than the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulation and evaluation of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions. The context-integrating model emulates the brain’s ability to use contextual information to uniquely resolve the interpretation of ambiguous stimuli.
A deep learning neural network classification model that emulates this ability by integrating weighted bidirectional context into the classification process is introduced. The model, referred to as the CINET, is implemented using a convolutional neural network (CNN), which is shown to be ideal for combining target and context stimuli and for extracting coupled target-context features. The CINET parameters can be manipulated to simulate congruent and incongruent context environments and to manipulate target-context stimulus relationships. The formulation of the CINET is quite general; consequently, it is restricted neither to stimuli in any particular sensory modality nor to the dimensionality of the stimuli. A broad range of experiments is designed to demonstrate the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The fact that performance improves through the inclusion of context can be exploited to design robust brain-inspired machine learning algorithms. It is interesting to note that the CINET is inspired both by the brain’s ability to integrate contextual information and by the CNN, which in turn is inspired by the hierarchical processing of visual information in the visual cortex. Finally, a CNN model, inspired by the hierarchical processing of visual information in the brain, is introduced to fuse information from an ensemble of multi-axial sensors in order to classify strikes such as boxing punches and taekwondo kicks in combat sports. Although CNNs are not an obvious choice for non-array data or for signals with non-linear variations, it is shown that CNN models can effectively classify multi-axial multi-sensor signals.
Experiments involving the classification of three-axis accelerometer and three-axis gyroscope signals measuring boxing punches and taekwondo kicks showed that the performance of the fusion classifiers was significantly superior to that of the uni-axial classifiers. Interestingly, the classification accuracies of the CNN fusion classifiers were significantly higher than those of the DTW fusion classifiers. Through training with representative signals and their local feature extraction property, the CNNs tend to be invariant to latency shifts and non-linear variations. Moreover, by increasing the number of network layers and the size of the training set, the CNN classifiers offer the potential for even better performance as well as the ability to handle a larger number of classes. Finally, owing to their generalized formulations, the classifier models can easily be adapted to classify multi-dimensional signals from multiple sensors in various other applications.
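The FI and DI fusion strategies described in the abstract above can be sketched in a few lines. The following is an illustrative toy example only: the synthetic "auditory" and "visual" feature vectors, the network sizes, and the use of scikit-learn MLP classifiers are my assumptions, not the dissertation's actual data or code.

```python
# Hedged sketch of feature-integrating (FI) vs. decision-integrating (DI)
# multimodal classification, on synthetic ten-class data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 600, 20
y = rng.integers(0, 10, n)                        # ten digit-like classes
audio = y[:, None] + rng.normal(0, 2.0, (n, d))   # noisy unimodal features
visual = y[:, None] + rng.normal(0, 2.0, (n, d))

# FI model: concatenate unimodal features, classify once.
fi = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
fi.fit(np.hstack([audio, visual]), y)

# DI model: classify each modality independently, then classify the
# combined decision probabilities with a second-stage multimodal classifier.
ua = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(audio, y)
uv = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(visual, y)
fused = np.hstack([ua.predict_proba(audio), uv.predict_proba(visual)])
di = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(fused, y)
```

Degrading one modality's noise level in such a sketch is one simple way to probe the inverse-effectiveness behaviour the abstract mentions.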
33

Atypical Multisensory Integration and the Temporal Binding Window in Autism Spectrum Disorder / 高機能自閉スペクトラム症者の非定型的多感覚統合と時間分解能

Kawakami, Sayaka 23 March 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Human Health Sciences / Degree No. Kō 23125 / Human Health Sciences Doctorate No. 87 / 新制||人健||6 (University Library) / Human Health Sciences Program, Graduate School of Medicine, Kyoto University / (Chief examiner) Prof. 林 悠, Prof. 稲富 宏之, Prof. 村井 俊哉 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Human Health Sciences / Kyoto University / DFAM
34

The Temporal Binding Window in Cross-Modal Sensory Perception : A Systematic Review

Sagré, Erik January 2021 (has links)
Previous research shows that integration of the senses depends heavily on temporal neural mechanisms. One unsolved problem is how the brain processes timing differences between the senses. In this systematic review (K = 18), audio-visual behavioral task paradigms are examined with a focus on temporal binding window estimates. The results show, among other things, that temporal integration is an adaptive neural process and that temporal acuity increases with age. Measurements were sometimes incompatible between studies, which limited the conclusions that could be drawn. Future studies should focus on standardizing operational parameters and comparing within- and between-group designs.
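As background to the review above: one common way individual studies estimate a temporal binding window is to fit a Gaussian to the proportion of "simultaneous" responses across audio-visual stimulus onset asynchronies (SOAs). The sketch below is illustrative only; the SOA grid, response rates, and window definition are invented, not taken from any reviewed study.

```python
# Hedged sketch: estimating a temporal binding window (TBW) by fitting a
# Gaussian to simultaneity-judgment data over audio-visual SOAs.
import numpy as np
from scipy.optimize import curve_fit

soas = np.array([-300, -200, -100, 0, 100, 200, 300], float)   # ms; negative = vision leads
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])  # proportion judged simultaneous

def gauss(soa, amp, mu, sigma):
    # amp: peak rate, mu: point of subjective simultaneity, sigma: window spread
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soas, p_sync, p0=(1.0, 0.0, 100.0))
tbw = 2 * abs(sigma)  # one simple window definition: +/- one SD around the peak
```

The incompatibility noted in the abstract arises partly because studies differ in exactly such definitions (SD multiples, fixed-criterion crossings, asymmetric fits).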
35

Body Ownership : An Activation Likelihood Estimation Meta-Analysis

Nilsson, Martin January 2020 (has links)
How is it that we feel that we own our body? And how does the brain create this feeling? By manipulating the integration of multisensory signals, researchers have recently begun to probe this question. By creating the illusory experience of owning external body-parts and entire bodies, researchers have investigated the neurofunctional correlates of body ownership. Recent attempts to quantitatively synthesize the neuroimaging literature on body ownership have shown inconsistent results. A large proportion of functional magnetic resonance imaging (fMRI) findings on body ownership is based on region of interest (ROI) analyses, an approach that produces inflated findings when results are synthesized in meta-analyses. We conducted a systematic search of the fMRI literature on ownership of body-parts and entire bodies. Two activation likelihood estimation (ALE) meta-analyses were conducted, testing the impact of including ROI-based findings. When ROI-based results were included, frontal and posterior parietal multisensory areas were associated with body ownership. However, a whole-brain meta-analysis excluding ROI-based results found no significant convergence of activation across the brain. These findings highlight the difficulty of quantitatively synthesizing a neuroimaging field in which a large part of the literature is based on ROI analyses; we discuss this difficulty and suggest future directions for the study of body ownership within the field of cognitive neuroscience.
36

Induced haltere movements reveal multisensory integration schema in Drosophila

Rauscher, Michael James 21 June 2021 (has links)
No description available.
37

Velocity Influences the Relative Contributions of Visual and Vestibular Cues to Self-Acceleration Perception / Velocity and Self-Acceleration Perception

Kenney, Darren January 2021 (has links)
Self-motion perception is based on the integration of visual (optic flow) and vestibular (inertial) sensory information. Previous research has shown that the relative contributions of visual and vestibular cues can change in real time based on the reliability of that information. The present study assessed whether initial velocity and acceleration magnitude influence the relative contributions of these cues to the detection of self-acceleration. Participants performed a simple response time task with visual and vestibular self-acceleration cues as targets. Visual optic flow was presented at three possible initial velocities of 3, 9, or 15 m/s, and accelerated to one of three possible final velocities of 21, 27, or 33 m/s. Corresponding vestibular cues were presented at magnitudes between 0.01 and 0.04 g. The self-acceleration cues were presented at three possible stimulus onset asynchronies (SOAs): visual-first (by 100 ms), in-sync, and vestibular-first (by 100 ms). We found that presenting the cues in sync resulted in the fastest responses across all velocities and acceleration magnitudes. Interestingly, presenting the visual cue first resulted in a relative advantage over vestibular-first at the slowest initial velocity of 3 m/s, and vice versa at the fastest initial velocity of 15 m/s. The fastest overall responses for visual-first and in-sync were observed at 9 m/s. The present results support the hypothesis that the velocity of optic flow can alter the relative contributions of visual and vestibular cues to the detection of self-acceleration. / Thesis / Master of Science (MSc) / This thesis contributes valuable insight to the emerging literature on how visual and vestibular cues are integrated to produce reliable self-motion perception. Specifically, it provides evidence that the velocity of optic flow plays an important role in mediating the relative weighting of visual and vestibular cues during acceleration perception.
38

Nature in VR: A Multisensory Perspective of Artificial Nature Exposure

Mossberg, Alfred, Wall, Kristoffer January 2023 (has links)
A virtual environment can offer a highly immersive experience with a feeling of presence similar to the physical world. Nevertheless, it still lacks several multisensory and emotional properties needed to fully substitute for or replicate the physical world's richness and complexity. Accordingly, this study examines how multisensory integration relates to immersive and restorative outcomes in an artificial nature paradigm. Our experiment collected behavioral and physiological data through self-report questionnaires and heart rate variability assessment from 30 participants. Notably, due to unforeseen technical problems, the heart rate data was not analyzed. Participants were divided into three conditions comparing audio and visual stimuli. Two conditions were unisensory (visual and auditory), and one was multisensory (audio-visual). We found no statistically significant difference in the level of immersion between the unisensory and multisensory conditions, underscoring the inconsistent findings and the need for more research regarding the relationship between multisensory integration and immersion. In relation to restorativeness, we found a significant difference between the audio-visual and audio conditions. Additionally, the medium-to-strong effect size indicates that visual stimuli influence restorative effects substantially more than audio stimuli. Collectively, in line with previous research, we observed a positive effect on restorativeness from spending time in artificial nature. Despite some limitations, our findings provide guidance for future researchers and contribute to the understanding of immersive multisensory VR experiences and their potential to promote mental rejuvenation and optimize restoration.
39

Funkce vestibulárního systému u pacientů s kochleárním implantátem / Function of Vestibular System in Patients after Cochlear Implantation

Bárta, Martin January 2021 (has links)
The theoretical part of the thesis summarizes the state of the art regarding the interaction of sound stimuli with the vestibular system and balance control. It further summarizes the effect of cochlear implantation on the peripheral vestibular structures and on stance stability. Cochlear implantation is an effective means of hearing rehabilitation. Nevertheless, surgery in the region of the inner ear reduces the function of the peripheral vestibular structures on the implanted side. The functional deficit of the peripheral vestibular system induced by the surgery is well tolerated by patients and quickly subsides spontaneously. The sound available to patients after implantation is one of the important modalities needed for balance control. Patients with a balance deficit were found to rely more on hearing when maintaining a stable stance. Some sounds can reduce postural sway; in particular, listening to broadband noise (such as white or pink noise) reduces postural sway. Balance control also relies on the ability to localize a sound source, since information about the position of a sound source can serve as a point of reference for driving balance reactions. The experimental part of the thesis quantifies changes in stance stability in patients with cochlear implants using stabilometry. The...
40

The Electrophysiological Correlates of Multisensory Self-Motion Perception

Townsend, Peter January 2022 (has links)
The perception of self-motion draws on inputs from the visual, vestibular, and proprioceptive systems. Decades of behavioural research have shed light on constructs involved in self-motion perception, such as multisensory weighting, heading perception, and sensory thresholds. Despite the abundance of knowledge generated by behavioural studies, there is a clear lack of research exploring the neural processes associated with full-body, multisensory self-motion perception in humans. Much of what is known about the neural correlates of self-motion perception comes either from the animal literature or from human neuroimaging studies administering only visual self-motion stimuli. The goal of this thesis was to bridge the gap between understanding the behavioural correlates of full-body self-motion perception and the underlying neural processes of the human brain. We used a high-fidelity motion simulator to manipulate the interaction of the visual and vestibular systems and gain insights into cognitive processes related to self-motion perception. The present line of research demonstrated that theta, alpha, and beta oscillations are the electrophysiological oscillations underlying self-motion perception. Specifically, the three empirical chapters combine to contribute two main findings to our understanding of self-motion perception. First, the beta band is an index of visual-vestibular weighting: beta event-related synchronization power is associated with a visual weighting bias, and beta event-related desynchronization power is associated with a vestibular weighting bias. Second, the theta band is associated with direction processing, regardless of whether direction information is provided through the visual or the vestibular system. This research is the first of its kind and has opened the door for future work to further develop our understanding of biomarkers related to self-motion perception.
/ Dissertation / Doctor of Philosophy (PhD) / As we move through the environment, either by walking or by operating a vehicle, our senses collect many different kinds of information that allow us to perceive factors such as how fast we are moving, which direction we are headed in, and how other objects are moving around us. Many of our senses take in very different information; for example, the vestibular system processes information about our head movements, while our visual system processes information about incoming light waves. Despite how different all of this self-motion information can be, we still manage to have one smooth perception of our bodies moving through the environment. This smooth perception of self-motion is due to our senses sharing information with one another, which is called multisensory integration. Two of the most important senses for collecting information about self-motion are the visual and vestibular systems. To this point, very little is known about the biological processes in the brain while the visual and vestibular systems integrate information about self-motion. Understanding this process has been limited because, until recently, we have not had the technology or the methodology to adequately record the brain while physically moving people in a virtual environment. Our team developed a ground-breaking set of methodologies to solve this issue and discovered key insights into the brainwave patterns that take place when we perceive ourselves in motion. There were two critical insights from our line of research. First, we identified a specific brainwave frequency (beta oscillations) that indexes integration between the visual and vestibular systems. Second, we demonstrated that another brainwave frequency (theta oscillations) is associated with perceiving which direction we are headed in, regardless of which sense this direction information comes from.
Our research lays the foundation for our understanding of biological processes of self-motion perception and can be applied to diagnosing vestibular disorders or improving pilot simulator training.
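The band-limited oscillation measures discussed in this abstract ultimately rest on estimating spectral power in the theta, alpha, and beta ranges. Below is a minimal sketch of such a band-power computation; the sampling rate, the synthetic signal, and the Welch parameters are assumptions for illustration, not the thesis's actual pipeline.

```python
# Hedged sketch: theta/alpha/beta band power from an EEG-like signal
# via Welch's power spectral density estimate.
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic signal: a 20 Hz ("beta-band") oscillation plus noise.
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(freqs, psd, lo, hi):
    # Approximate the integral of the PSD over [lo, hi) Hz.
    m = (freqs >= lo) & (freqs < hi)
    return psd[m].sum() * (freqs[1] - freqs[0])

theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
beta = band_power(freqs, psd, 13, 30)
```

Event-related synchronization and desynchronization measures, as used in the thesis, compare such band power in a post-stimulus window against a pre-stimulus baseline.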
