11

The Temporal Window of Visuotactile Integration

Zhou, Yichu January 2016 (has links)
The simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks are the two most widely used methods for measuring the window of multisensory integration; however, there are indications that these two tasks involve different cognitive processes and therefore produce unrelated results. The present study measured observers’ visuotactile window of integration using both tasks in order to examine whether SJs and TOJs produce consistent results for this particular pairing of modalities. Experiment 1 revealed no significant correlations between the SJ and TOJ tasks, indicating that they appear to measure distinct processes in visuotactile integration, and in addition showed that both sensory and decisional factors contribute to this difference. These findings were replicated in Experiment 2, which, along with Experiment 3, also showed that the limited reliability of the SJ and TOJ tasks may in part be responsible for the lack of agreement between them. A secondary result concerned the point of subjective simultaneity (PSS), which was tactile-leading across all three experiments, contradicting some of the previous literature on visuotactile integration. Manipulating the spatial distance between the visual and tactile stimuli (Experiment 2) and the certainty of stimulus location (Experiment 3) did not lead to significant changes in the location of the PSS. / Thesis / Master of Science (MSc) / Perception often involves the use of more than one sensory modality at the same time; for example, touching an object usually produces sensory signals in both the visual and tactile modalities. Since the amount of time needed to transmit and process sensory signals differs among the modalities, the brain allows for a certain time difference between signals from various pairs of modalities that it will still consider as coming from a single event. Two tasks commonly used to measure these allowable time differences are the simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. Although they are usually used interchangeably, the present data show that the results from these tasks in the visuotactile pairing of modalities are unrelated, and a major contributing reason appears to be the tasks’ limited reliability.
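Both tasks are typically summarized by fitting a psychometric function to response proportions across stimulus onset asynchronies (SOAs). As a concrete illustration, here is a minimal sketch of how a PSS and window width might be extracted from TOJ data by fitting a cumulative Gaussian; the SOAs, response proportions, and fitting choices below are invented for illustration and are not the thesis’s actual procedure.

```python
# Illustrative sketch (not the thesis's analysis): estimating the point of
# subjective simultaneity (PSS) and just-noticeable difference (JND) from
# temporal order judgment (TOJ) data with a cumulative Gaussian fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchronies in ms (negative = touch leads vision)
soa = np.array([-200, -100, -50, -25, 0, 25, 50, 100, 200], dtype=float)
# Hypothetical proportion of "vision first" responses at each SOA
p_vision_first = np.array([0.05, 0.12, 0.25, 0.40, 0.55,
                           0.70, 0.82, 0.93, 0.98])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: P("vision first") as a function of SOA."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_vision_first, p0=[0.0, 50.0])

# The PSS is the SOA at which both orders are reported equally often;
# the JND is the SOA shift from 50% to 75% "vision first" responses.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

On this toy dataset the fitted PSS comes out negative, i.e., the tactile stimulus must lead the visual one to be perceived as simultaneous, mirroring the tactile-leading direction reported across the three experiments above.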
12

Multisensory Integration in Social and Nonsocial Events and Emerging Language in Toddlers

Bruce, Madeleine D. 12 1900 (has links)
Multisensory integration enables young children to combine information across their senses to create rich, coordinated perceptual experiences. Events with high intersensory redundancy across the senses provide salient experiences that aid in the integration process and facilitate perceptual learning. Thus, this study’s first objective was to evaluate whether toddlers’ multisensory integration abilities generalize across social/nonsocial conditions, and whether multisensory integration abilities predict 24-month-olds’ language development. Additionally, previous research has not examined contextual factors, such as socioeconomic status (SES) or parenting behaviors, that may influence the development of multisensory integration skills. As such, this study’s second aim was to evaluate whether maternal sensitivity and SES moderate the proposed relationship between multisensory integration and language outcomes. Results indicated that toddlers’ multisensory integration abilities, F(1,33) = 4.191, p = .049, but not their general attention control skills, differed as a function of condition (social or nonsocial), and that social multisensory integration significantly predicted toddlers’ expressive vocabularies at 24 months old, β = .530, p = .007. However, no evidence was found to suggest that SES or maternal sensitivity moderated the detected relationship between multisensory integration abilities and language outcomes; rather, mothers’ sensitivity scores, in addition to toddlers’ social multisensory integration skills, directly predicted toddlers’ expressive language outcomes, β = .320, p = .044. These findings suggest that at 24 months of age, both sensitive maternal behaviors and the ability to integrate social multisensory information are important to the development of early expressive language outcomes. / M. S. / Multisensory integration allows children to make sense of information received across their senses. Previous research has shown that events containing simultaneous and overlapping sensory information aid children in learning about objects. However, research has yet to evaluate whether children’s multisensory integration abilities are related to language learning. Thus, this study’s first goal was to examine whether toddlers are equally skilled at integrating multisensory information in social and nonsocial contexts, and whether multisensory integration skills are related to toddlers’ language skills. This study’s second goal was to examine whether parenting behaviors and/or familial access to resources (i.e., socioeconomic status) play a role in the hypothesized relationship between multisensory integration and language in toddlerhood. Results indicated that toddlers show better multisensory integration abilities when viewing social as opposed to nonsocial sensory information, and that social multisensory integration skills were significantly related to their language skills. Also, maternal parenting behaviors, but not socioeconomic status, were significantly related to toddlers’ language abilities. These findings suggest that at 24 months of age, both sensitive maternal parenting and the ability to integrate social multisensory information are important to the development of language in toddlerhood.
13

Human Olfactory Perception: Characteristics, Mechanisms and Functions

Chen, Jennifer 16 September 2013 (has links)
Olfactory sensing is ubiquitous across animals and important for survival. Yet its characteristics, mechanisms, and functions in humans remain poorly understood. In this dissertation, I present four studies on human olfactory perception. Study I investigates the impact of short-term exposures to an odorant on long-term olfactory learning and habituation; Study II examines the human ability to localize smells; Study III probes visual-olfactory integration of object representations; and Study IV explores the role of olfaction in sensing nutrients. Several conclusions are drawn from these studies. First, brief intermittent exposures to even a barely detectable odorant lead to long-term, incremental, odorant-specific habituation. Second, humans localize smells based on gradient cues between the nostrils. Third, there is a within-hemispheric advantage in the integration of visual-olfactory object representations. Fourth, olfaction partakes in nutrient-sensing and facilitates the detection of food. Some broader implications of these findings are discussed.
14

Multisensory integration of spatial cues in old age

Bates, Sarah Louise January 2015 (has links)
Spatial navigation is essential for everyday function. It is successfully achieved by combining internally generated information – such as vestibular and self-motion cues (known as path integration) – with external sources of information such as visual landmarks. These multiple sources and sensory domains are often associated with uncertainty and can provide conflicting information. The key to successful navigation is therefore how best to integrate information from these internal and external sources. Healthy younger adults do this in a statistically optimal fashion by weighting each cue by its perceived reliability during integration, consistent with the rules of Bayesian integration. However, the precise impact of ageing on the component senses of path integration, and on the integration of such self-motion cues with external information, is currently unclear. Given that impaired spatial ability is a common problem associated with ageing and is often a primary indicator of Alzheimer’s disease, this thesis asks whether age-related navigational impairments are related to fundamental deficits in the components of path integration and/or inadequate integration of spatial cues. Part 1 focussed on how ageing affects the vestibular, kinaesthetic and visual components of path integration during linear navigation in the real world. Using path reproduction, distance estimation and depth perception tasks, I found that older adults showed no performance deficits in conditions that replicated everyday walking, when visual and self-motion cues were present. However, they were impaired when relying on vestibular information alone. My results suggest that older adults are especially vulnerable to sensory deprivation but that weaker sensory domains can be compensated for by other sensory information, potentially by integrating different spatial cues in a Bayesian fashion, whereby the impact of unreliable or diminished senses can be minimised. Part 2 developed the conclusions of Part 1 by testing younger and older adults’ integration of visual landmarks and self-motion information during a simple homing task. I investigated the hypothesis that the integration of spatial information from multiple sensory domains is driven by Bayesian principles, and that old age may affect the efficiency and elasticity of reliability-driven integration. Younger and older participants navigated to a previously visited location using self-motion and/or visual information. In some trials there was a conflict of information, which revealed the relative influence of self-motion and visual landmarks on behaviour. Findings revealed that both younger and older adults integrated visual and self-motion information to improve accuracy and precision, but older adults did not place as much weight on visual information as would have been optimal. This may have been the result of increased noise in the underlying spatial representations of older adults. Furthermore, older adults did not effectively re-weight visual and self-motion cues in line with the changing reliability of visual information, suggesting diminished plasticity in the underlying spatial representations. However, further development of the testing paradigm would strengthen support for these findings. Together, the findings of Part 2 suggest that increased neural noise and the suboptimal weighting of spatial cues might contribute to the common problems with navigation experienced by many older adults.
This thesis provides original evidence for age-related changes to multisensory integration of spatial cues. Path integration abilities are relatively preserved when older adults navigate linear paths in the real world, despite loss of vestibular function. However, navigation is affected by old age when the task becomes more complex. Multisensory integration of spatial cues is partially preserved, but it is not fully efficient. I offer evidence that the navigational impairments common to old age are related to fundamental deficits in the components of path integration, to task complexity, and to suboptimal integration of spatial cues. Crucially, however, path integration is preserved sufficiently in older adults that they are able to navigate small-scale spaces with relative success.
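The reliability-weighted combination rule referred to above can be made concrete. Under the standard maximum-likelihood formulation of Bayesian cue integration, each cue is weighted by its inverse variance, and the combined estimate is more precise than either cue alone. The following is a minimal sketch with invented variance values, illustrating the general rule rather than the thesis’s own analysis:

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue
# combination: weights are proportional to inverse variance, and the
# combined variance is lower than that of either single cue.
import numpy as np

def integrate_cues(estimates, variances):
    """Optimally combine independent Gaussian cue estimates."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances              # reliability = inverse variance
    w /= w.sum()                     # normalised weights
    combined = np.dot(w, estimates)  # reliability-weighted mean
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Hypothetical homing estimates (cm from the goal) and their variances:
visual = (4.0, 9.0)          # landmark-based estimate
self_motion = (10.0, 36.0)   # path-integration estimate
est, var = integrate_cues([visual[0], self_motion[0]],
                          [visual[1], self_motion[1]])
print(f"combined estimate = {est:.2f} cm, variance = {var:.2f}")
# Optimal weights here are 0.8 (visual) and 0.2 (self-motion); an older
# adult underweighting vision, as reported above, would deviate from this.
```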
15

Neural Mechanisms of Sensory Integration: Frequency Domain Analysis of Spike and Field Potential Activity During Arm Position Maintenance with and Without Visual Feedback

January 2017 (has links)
Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two cortical areas, the superior and inferior parietal lobules (SPL and IPL), as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multi-taper spectral analysis methods were used to quantify changes in spiking, local field potential (LFP), and spike-field coherence during arm-position maintenance, and individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the SPL exhibited sensory condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band, whereas cells in the IPL typically showed a more diverse mix of oscillatory and refractory spiking patterns in response to changing sensory conditions. Contrary to the assumptions made in many modelling studies, no cells in the SPL or IPL exhibited Poisson spiking statistics. Evoked LFPs in both areas showed greater effects of target location than of visual condition, though evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition; significant effects of target location on evoked responses were also observed during the movement period of the task. In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position-estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials suppressed relative to unimodal trials. The spike-field coherence results were inconclusive in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence. / Dissertation/Thesis / Doctoral Dissertation Biomedical Engineering 2017
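As an illustration of the multi-taper approach mentioned above, the following sketch estimates LFP spectral power with DPSS (Slepian) tapers and averages it over the beta band. The sampling rate, taper parameters, and synthetic signal are all assumptions for illustration; this is not the dissertation’s analysis code.

```python
# Illustrative multitaper power estimate for a synthetic LFP trace:
# project the signal onto orthogonal DPSS tapers, average the per-taper
# power spectra, then summarize power in the beta band (13-30 Hz).
import numpy as np
from scipy.signal.windows import dpss

fs = 1000.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # one 1-s trial
rng = np.random.default_rng(0)
# Synthetic LFP: a 20 Hz (beta) oscillation buried in noise
lfp = np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)

nw = 4                                           # time-bandwidth product
tapers = dpss(lfp.size, NW=nw, Kmax=2 * nw - 1)  # 7 orthogonal tapers

freqs = np.fft.rfftfreq(lfp.size, d=1.0 / fs)
# Power spectrum for each tapered copy of the signal, averaged over tapers
spectra = np.abs(np.fft.rfft(tapers * lfp, axis=1)) ** 2
psd = spectra.mean(axis=0)

beta = (freqs >= 13) & (freqs <= 30)
print(f"mean beta-band (13-30 Hz) power: {psd[beta].mean():.2f}")
```

Averaging across tapers trades a small amount of frequency resolution for a lower-variance spectral estimate, which is why the multi-taper method suits short, noisy trial epochs like the position-maintenance period described above.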
16

Brain Mechanisms Underlying Integration of Optic Flow and Vestibular Cues to Self-motion / オプティカルフローと自己運動知覚に関する前庭情報の統合の神経基盤

Uesaki, Maiko 26 March 2018 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Letters / Degree No. Kō 20828 / Letters Doctorate No. 758 / Shinsei||Bun||655 (University Library) / Department of Behavioral Studies, Graduate School of Letters, Kyoto University / (Chief examiner) Professor Hiroshi Ashida; Professor Shoji Itakura; Professor James Russell Anderson; Associate Professor Christian Altmann / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Letters / Kyoto University / DFAM
17

Multisensory Integration in Early Toddlerhood: Interrelationships with Context, SES and Expressive Vocabulary

Taylor, Caroline January 2021 (has links)
In the everyday environment, we receive information from various sensory inputs, and yet we perceive and integrate the incoming information in a way that is meaningful. Remarkably, infants and toddlers are capable of sensory integration early in life. By integrating information, particularly speech, infants ultimately learn to reproduce language by late toddlerhood. These language skills form a foundation for learning and achievement later in life, and there is documented evidence that language skills vary with experiences related to socioeconomic status (SES). Language disparities can be measured early in development and continue to widen throughout childhood. Although there is clear evidence that language-learning trajectories are influenced by SES, less is known about multisensory integration (MSI) skills as measured here and how they may differ as a function of SES. Here, MSI was investigated to gain insight into the changes that occur in MSI and expressive vocabulary in 68 toddlers between 18 and 24 months. Finally, this relationship was investigated in the context of SES. At 18 months, toddlers demonstrated significant matching for nonsocial conditions; at 24 months, toddlers also matched on low-competition social trials, thus demonstrating an improvement in matching from 18 to 24 months. There were no significant relationships between MSI and expressive vocabulary, and only one unexpected relationship between MSI and SES. These findings extend the research of Bahrick and colleagues (2018) by supplementing the previously studied 12-month-olds and 2-5-year-olds with an earlier age (18 months), and open new doors for studying toddlers’ emerging social MSI. / M.S. / In the everyday environment, we experience various sights and sounds from multiple sources, and yet we perceive the incoming information in a way that is meaningful. Infants and toddlers are likewise capable of combining multiple sources of information in a way that is beneficial for language learning. Merging sensory information (e.g., correctly matching their mother’s voice to their mother) creates a foundation for language learning. There is evidence that language abilities differ as a result of socioeconomic status (SES), that these differences can be found early in development, and that they continue into childhood. Although research indicates that differences in language arise as a result of SES, it is unclear whether the ability to merge multiple sources of information (also known as multisensory integration), particularly while experiencing competing information (e.g., noise, multiple speakers), also differs as a result of SES. Here, the ability of young toddlers aged 18 and 24 months to integrate multiple sources of information, along with their vocabulary, was studied to understand whether these skills progress with age and whether they differ as a result of SES. 18-month-olds demonstrated better integration of sensory information when blocks were falling (a nonsocial event) than when women were shown on screen speaking in child-directed speech (a social event). At 24 months, toddlers also correctly matched the information of the social event when there was no competing information on the screen, thus improving social integration from 18 months. There were no significant relationships between MSI and vocabulary, and only one relationship between MSI and SES.
More research is needed to understand the improvement in social integration from 18 to 24 months, and further questions remain about how SES may play a role in integrating information.
18

Signal compatibility as a modulatory factor for audiovisual multisensory integration

Parise, Cesare Valerio January 2013 (has links)
The physical properties of the distal stimuli activating our senses are often correlated in nature; it would therefore be advantageous to exploit such correlations to better process sensory information. Stimulus correlations can be contingent and readily available to the senses (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). Over the last century, a large body of research on multisensory processing has demonstrated the existence of compatibility effects between individual features of stimuli from different sensory modalities. Such compatibility effects, termed crossmodal correspondences, possibly reflect the internalization of the natural correlations between stimulus properties. The present dissertation assesses the effects of crossmodal correspondences on multisensory processing and reports a series of experiments demonstrating that crossmodal correspondences influence the processing rate of sensory information, distort perceptual experiences, and lead to stronger multisensory integration. Moreover, a final experiment investigating the effects of contingent signal correlations on multisensory processing demonstrates the key role of temporal correlation in inferring whether or not two signals have a common physical cause (i.e., the correspondence problem). A Bayesian framework is proposed to interpret the present results, whereby stimulus correlations, represented in the prior distribution of expected crossmodal co-occurrence, operate as cues to solve the correspondence problem.
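The role of temporal correlation in solving the correspondence problem can be illustrated with a toy computation: two signals driven by a common source are strongly correlated, whereas unrelated signals are not, so correlation can serve as evidence for a common cause. The signals, noise levels, and decision threshold below are invented for illustration and do not reproduce the dissertation’s experiments or model:

```python
# Toy correspondence-problem decision: correlate two sensory streams and
# threshold the correlation to infer whether they share a common cause.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 200)
source = np.sin(2 * np.pi * 3 * t)            # common underlying event

# Audio and visual streams driven by the same source, plus sensory noise
audio = source + 0.5 * rng.standard_normal(t.size)
visual_same = source + 0.5 * rng.standard_normal(t.size)
visual_other = rng.standard_normal(t.size)    # unrelated visual stream

def common_cause(a, b, threshold=0.5):
    """Crude correspondence decision: correlate, then threshold."""
    r = np.corrcoef(a, b)[0, 1]
    return r, r > threshold

for label, v in [("same source", visual_same),
                 ("different source", visual_other)]:
    r, same = common_cause(audio, v)
    print(f"{label}: r = {r:.2f} -> common cause: {same}")
```

A fuller Bayesian treatment would replace the hard threshold with a posterior over the common-cause hypothesis, with the prior encoding the expected crossmodal co-occurrence, as the abstract proposes.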
19

Facilitating visual target identification using non-visual cues

Ngo, Mary Kim January 2012 (has links)
The research presented in this thesis was designed to investigate whether and how the temporal synchrony and spatial congruence of non-visual cues with visual targets could work together to improve the discrimination and identification of visual targets in neurologically-healthy adult humans. The speed and accuracy of participants’ responses were compared following the presence or absence of temporally synchronous and/or spatially congruent or incongruent auditory, vibrotactile, and audiotactile cues in the context of dynamic visual search and rapidly-masked visual target identification. The understanding of the effects of auditory, vibrotactile, and audiotactile cues derived from these laboratory-based tasks was then applied to an air traffic control simulation involving the detection and resolution of potential conflicts (represented as visual targets amidst dynamic and cluttered visual stimuli). The results of the experiments reported in this thesis demonstrate that, in the laboratory-based setting, temporally synchronous and spatially informative non-visual cues both gave rise to significant improvements in participants’ performance, and the combination of temporal and spatial cuing gave rise to additional improvements in visual target identification performance. In the real-world setting, however, only the temporally synchronous unimodal auditory and bimodal audiotactile cues gave rise to a consistent facilitation of participants’ visual target detection performance. The mechanisms and accounts proposed to explain the effects of spatial and temporal cuing, namely multisensory integration and attention, are examined and discussed with respect to the observed improvements in participants’ visual target identification performance.
20

A Psychophysical Study of a Sound-Induced Visual Illusion / Étude psychophysique d'une illusion visuelle induite par le son

Éthier-Majcher, Catherine January 2008 (has links)
Master's thesis digitized by the Records Management and Archives Division (Division de la gestion de documents et des archives) of the Université de Montréal.
