21

Multisensory Integration in Social and Nonsocial Events and Emerging Language in Toddlers

Bruce, Madeleine D. 12 1900 (has links)
Multisensory integration enables young children to combine information across their senses to create rich, coordinated perceptual experiences. Events with high intersensory redundancy across the senses provide salient experiences which aid in the integration process and facilitate perceptual learning. Thus, this study’s first objective was to evaluate whether toddlers’ multisensory integration abilities generalize across social/nonsocial conditions, and whether multisensory integration abilities predict 24-month-olds’ language development. Additionally, previous research has not examined contextual factors, such as socioeconomic status or parenting behaviors, that may influence the development of multisensory integration skills. As such, this study’s second aim was to evaluate whether maternal sensitivity and SES moderate the proposed relationship between multisensory integration and language outcomes. Results indicated that toddlers’ multisensory integration abilities, F(1,33) = 4.191, p = .049, but not their general attention control skills, differed as a function of condition (social or nonsocial), and that social multisensory integration significantly predicted toddlers’ expressive vocabularies at 24 months old, β = .530, p = .007. However, no evidence was found to suggest that SES or maternal sensitivity moderated the detected relationship between multisensory integration abilities and language outcomes; rather, maternal sensitivity scores directly predicted toddlers’ expressive language outcomes, β = .320, p = .044, in addition to their social multisensory integration skills. These findings suggest that at 24 months of age, both sensitive maternal behaviors and the ability to integrate social multisensory information are important to the development of early expressive language outcomes. / M. S. / Multisensory integration allows children to make sense of information received across their senses. Previous research has shown that events containing simultaneous and overlapping sensory information aid children in learning about objects. However, research has yet to evaluate whether children's multisensory integration abilities are related to language learning. Thus, this study’s first goal was to look at whether toddlers are equally skilled at integrating multisensory information in social and nonsocial contexts, and whether multisensory integration skills are related to toddlers' language skills. This study’s second goal was to examine whether parenting behaviors and/or familial access to resources (i.e., socioeconomic status) play a role in the hypothesized relationship between multisensory integration and language in toddlerhood. Results indicated that toddlers show better multisensory integration abilities when viewing social as opposed to nonsocial sensory information, and that social multisensory integration skills were significantly related to their language skills. Also, maternal parenting behaviors, but not socioeconomic status, were significantly related to toddlers' language abilities. These findings suggest that at 24 months of age, both sensitive maternal parenting and the ability to integrate social multisensory information are important to the development of language in toddlerhood.
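
To make the moderation analysis described above concrete, here is a minimal sketch of how such a test is commonly run. The data file, variable names, and model specification are hypothetical illustrations of the general technique (moderation tested as a regression interaction term), not the analysis actually reported in the thesis.

    # Minimal sketch of a moderation analysis (hypothetical variables).
    # The abstract does not specify the software or exact model used.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("toddler_data.csv")  # assumed columns: vocab, msi, ses, sensitivity

    # "msi * ses" expands to msi + ses + msi:ses; a significant msi:ses or
    # msi:sensitivity coefficient would indicate moderation, i.e., that the
    # effect of multisensory integration on vocabulary depends on the moderator.
    model = smf.ols("vocab ~ msi * ses + msi * sensitivity", data=df).fit()
    print(model.summary())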
22

Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration.

Roach, N.W., Heron, James, McGraw, Paul V. January 2006 (has links)
In order to maintain a coherent, unified percept of the external environment, the brain must continuously combine information encoded by our different sensory systems. Contemporary models suggest that multisensory integration produces a weighted average of sensory estimates, where the contribution of each system to the ultimate multisensory percept is governed by the relative reliability of the information it provides (maximum-likelihood estimation). In the present study, we investigate interactions between auditory and visual rate perception, where observers are required to make judgments in one modality while ignoring conflicting rate information presented in the other. We show a gradual transition between partial cue integration and complete cue segregation with increasing inter-modal discrepancy that is inconsistent with mandatory implementation of maximum-likelihood estimation. To explain these findings, we implement a simple Bayesian model of integration that is also able to predict observer performance with novel stimuli. The model assumes that the brain takes into account prior knowledge about the correspondence between auditory and visual rate signals, when determining the degree of integration to implement. This provides a strategy for balancing the benefits accrued by integrating sensory estimates arising from a common source, against the costs of conflating information relating to independent objects or events.
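
For reference, the weighted-average rule the abstract invokes is the standard reliability-weighted (maximum-likelihood) cue-combination model. In textbook form (this statement of the model is a gloss, not a quotation from the thesis):

    \hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V,
    \quad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2},
    \quad w_V = 1 - w_A

where \hat{S}_A and \hat{S}_V are the unimodal rate estimates and \sigma_A^2, \sigma_V^2 their variances. The combined estimate has variance \sigma_{AV}^2 = \sigma_A^2 \sigma_V^2 / (\sigma_A^2 + \sigma_V^2), which never exceeds the smaller unimodal variance; that reduction is the benefit of integrating estimates from a common source, while the cost arises when the same rule conflates signals from independent sources.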
23

Multi-Sensory Stimulation Environments For Use With Dementia Patients: Staff Perspectives On Reduction Of Agitation And Negative Behaviors

Houston, Megan 01 January 2015 (has links)
Background: Dementia is a degenerative neurological disorder that afflicts a growing proportion of the global population. Complementary alternative medicine (CAM) modalities are under investigation for their therapeutic value in the management of dementia. Purpose: Nursing care of dementia sufferers can include managing agitation and negative behaviors; this study investigates staff appraisal of the Multi-Sensory Stimulation Environment (MSSE) as an intervention for these nursing challenges. Methods: A purposive sample of nursing staff employed in residential care for dementia patients was recruited 10 weeks after the initiation of an open-access MSSE at the facility to complete a confidential self-administered questionnaire. Results: 79% of potential participants returned completed surveys, for a total sample of n = 23. 70% of survey respondents felt that residents were utilizing the MSSE "Somewhat Frequently" or "Very Frequently." 77% of the staff felt the MSSE should continue in use at the facility or continue with some alterations. The sample suggested that the MSSE is helpful for mood, specifically anger, sadness, anxiety, and restlessness, but not for boredom. Higher-scoring items in favor of the MSSE intervention included confusion, perseveration, wandering, and interpersonal conflict. Conclusion: Several components of agitation and negative behavior in the dementia population appear to be improved with the use of an MSSE, according to this sample. Further research is needed to corroborate these results and to explore more detailed recommendations regarding the use of the MSSE in dementia care.
24

Non-auditory Influences on the Auditory Periphery

Gruters, Kurtis G. January 2016 (has links)
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies. 3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system. / Dissertation
25

The facilitatory crossmodal effect of auditory stimuli on visual perception

Chen, Yi-Chuan January 2011 (has links)
The aim of the experiments reported in this thesis was to investigate the multisensory interactions taking place between vision and audition. The focus is on the modulatory role of the temporal coincidence and semantic congruency of pairs of auditory and visual stimuli. With regard to the temporal coincidence factor, whether, and how, the presentation of a simultaneous sound facilitates visual target perception was tested using the equivalent noise paradigm (Chapter 3) and the backward masking paradigm (Chapter 4). The results demonstrate that crossmodal facilitation can be observed in both visual detection and identification tasks. Importantly, however, the results also reveal that the sound not only had to be presented simultaneously, but also reliably, with the visual target. The suggestion is made that the reliable co-occurrence of the auditory and visual stimuli provides observers with the statistical regularity needed to assume that the visual and auditory stimuli likely originate from the same perceptual event (i.e., that they in some sense 'belong together'). The experiments reported in Chapters 5 through 8 were designed to investigate the role of semantic congruency in audiovisual interactions. The results of the experiments reported in Chapter 5 revealed that the semantic context provided by the soundtrack that a person happens to be listening to can modulate his/her visual conscious perception in the binocular rivalry situation. In Chapters 6-8, the time course of audiovisual semantic interactions was investigated using categorization, detection, and identification tasks on visual pictures. The results suggested that when the presentation of the sound leads the presentation of a picture by more than 240 ms, it induces a crossmodal semantic priming effect. In addition, when the presentation of the sound lags a semantically-congruent picture by about 300 ms, it enhances performance, presumably by helping to maintain the visual representation in short-term memory. The results indicate that audiovisual semantic interactions constitute a heterogeneous group of phenomena. A crossmodal type-token binding framework is proposed to account for the parallel processing of the spatiotemporal and semantic interactions of multisensory inputs. The suggestion is that congruent information in the type and token representation systems is integrated and ultimately bound into a unified multisensory object representation.
26

Multisensory Input to the Lateral Rostral Suprasylvian Sulcus (LRSS) in Ferret

Hagood, Elizabeth 21 April 2009 (has links)
For the brain to construct a comprehensive percept of the sensory world, information from the different senses must converge onto individual neurons within the central nervous system. As a consequence, how these neurons convert convergent sensory input into multisensory information is an important question facing neuroscience today. Recent physiological studies have demonstrated the presence of a robust population of multisensory neurons in the lateral bank of the rostral suprasylvian sulcus (LRSS) in the adult ferret (Keniston et al., 2008). The LRSS is a region situated between somatosensory and auditory cortices, where bimodal (somatosensory-auditory) neurons occupy the greatest percentage of the sensory-responsive cell population. The present study was designed to evaluate the anatomical connections that underlie these multisensory features. Injections of neuroanatomical tracer were first made into the LRSS. After transport and histological processing, microscopy revealed retrogradely-labeled cell bodies in identified regions of cortex and thalamus. The resultant analysis showed that the greatest number of projections to the LRSS originated in auditory and somatosensory cortex. Of these, auditory cortex contributed the greater proportion of inputs. These anatomical data support the idea that the LRSS is a multisensory cortex that receives primarily bimodal input from auditory and somatosensory sources.
27

Dendritic Spine Density Varies Between Unisensory and Multisensory Cortical Regions

Bajwa, Moazzum 07 May 2010 (has links)
In the brain, the dendritic spine is a point of information exchange that extends the neuronal surface on which synapses occur and facilitates and stabilizes those contacts. Furthermore, dendritic spines dynamically change in shape and number in response to a variety of factors: spine numbers are reduced in mental retardation, enhanced during development, sensory enrichment, or physical exercise, and fluctuate during the reproductive cycle. Thus, for a given neuron type, it might be expected that dendritic spine number achieves a dynamic optimum. Indeed, many studies of the spine density of pyramidal neurons in sensory cortex indicate that an average of ~1.4 spines/micron is present (Briner et al., 2010). Most such studies examined dendritic spines from primary sensory areas, which are dominated by inputs from a single sensory modality. However, a large number of neural regions receive inputs from more than one sensory modality, and it is hypothesized that spine density should increase to accommodate these additional inputs. To test this hypothesis, the present experiments used Golgi-Cox stained layer 2-3 pyramidal neurons from ferret primary somatosensory (S1) and auditory (A1) cortical regions, as well as from the higher-level rostral posterior parietal (PPr) and lateral rostral suprasylvian (LRSS) multisensory areas. Spine densities in S1 (avg 1.309 ± 0.247 spines/micron) and A1 (avg 1.343 ± 0.273 spines/micron) were significantly greater (p<0.05, t-test) than those observed in the multisensory regions PPr (avg 1.242 ± 0.205 spines/micron) and LRSS (avg 1.099 ± 0.217 spines/micron). These results indicate that spine densities are greater in primary (S1, A1) than in higher-level (PPr, LRSS) sensory areas. The functional consequences of these unexpected findings are discussed in light of potential biophysical differences between unisensory and multisensory neurons.
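
As a quick plausibility check on a comparison of this kind, the reported means and standard deviations suffice for a two-sample t-test once sample sizes are supplied. Below is a sketch using SciPy; the neuron counts (nobs) are hypothetical assumptions, since the abstract does not report them.

    # Two-sample t-test from summary statistics (S1 vs. LRSS, as reported above).
    # Sample sizes are hypothetical; the abstract omits them.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=1.309, std1=0.247, nobs1=30,  # S1 (n assumed)
        mean2=1.099, std2=0.217, nobs2=30,  # LRSS (n assumed)
        equal_var=True,
    )
    print(f"t = {t:.2f}, p = {p:.4f}")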
28

The Contribution of the Montessori Approach to Multisensory Approaches to Early Learning Disabilities

Jamieson, Natalie Yolande 26 October 2006 (has links)
Learning disabilities have become of increasing concern to educators. More and more children are having difficulty learning to read and write. This dissertation investigates what constitutes a learning disability, its etiology, and whether it is possible to identify these disabilities in early childhood. The investigation further aims to discover whether these learning disabilities comprise sub-disabilities and whether these can be identified as such. To this end, the research aims to determine the most appropriate remedial intervention strategies used for learning disabilities. Multisensory intervention is therefore explored. On this basis, the Montessori Method is examined to ascertain whether the method can contribute to multisensory intervention at the preschool level. It is argued that the Montessori Method is admirably suited to making such a contribution. Further empirical research into these claims is indicated.
29

Crossmodal coupling of oculomotor control and spatial attention in vision and audition

Rolfs, Martin, Engbert, Ralf, Kliegl, Reinhold January 2005 (has links)
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention [e.g., Engbert & Kliegl (2003), Vision Res 43:1035-1045]. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e., an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with cue direction and then opposite to cue direction during the overshoot epoch, probably as a result of an inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades oriented in the cue direction. Thus, microsaccades can be used to study crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.
30

Human Olfactory Perception: Characteristics, Mechanisms and Functions

Chen, Jennifer 16 September 2013 (has links)
Olfactory sensing is ubiquitous across animals and important for survival. Yet its characteristics, mechanisms, and functions in humans remain poorly understood. In this dissertation, I present four studies on human olfactory perception. Study I investigates the impact of short-term exposures to an odorant on long-term olfactory learning and habituation; Study II examines the human ability to localize smells; Study III probes visual-olfactory integration of object representations; and Study IV explores the role of olfaction in sensing nutrients. Several conclusions are drawn from these studies. First, brief intermittent exposures to even a barely detectable odorant lead to long-term incremental odorant-specific habituation. Second, humans localize smells based on gradient cues between the nostrils. Third, there is a within-hemispheric advantage in the integration of visual-olfactory object representations. Fourth, olfaction partakes in nutrient sensing and facilitates the detection of food. Some broader implications of our findings are discussed.
