11. Multisensory processing in the human brain. Thesen, Thomas. January 2005.
Perception has traditionally been studied as a modular function in which different sensory systems operate as separate and independent modules. However, multisensory integration is essential for the perception of a coherent and unified representation of the external world that we experience phenomenologically. Mounting evidence suggests that the senses do not operate in isolation but that the brain processes and integrates information across modalities. A standing debate concerns the level of the processing hierarchy at which the sensory streams converge: whether multisensory speech information converges first in higher-order polysensory areas such as the superior temporal sulcus (STS) and is then fed back to sensory areas, or whether information is already integrated in primary and secondary sensory areas at early stages of sensory processing. The studies in this thesis aim to investigate this question by focussing on the spatio-temporal aspects of multisensory processing, as well as on phonetic and non-phonetic integration in the human brain during auditory-visual speech perception.
12. Behavioral and functional neuroimaging investigations of odor imagery. Djordjevic, Jelena. January 2004.
The aim of this doctoral dissertation was to examine the effects of olfactory imagery on other sensory and perceptual processes, and to explore the brain areas involved in the generation of olfactory mental images. Four studies, three behavioral and one functional neuroimaging (Positron Emission Tomography, or PET), were conducted; healthy volunteers participated in all four. In Study 1, participants were better at detecting weak odors when they simultaneously imagined the same odor as the one being detected than when they imagined a different one. This effect of olfactory imagery was specific, as a request to imagine objects visually had no effect on the detection of weak odors. In Studies 2 and 3, the effects of presented and imagined odors on taste perception were compared. The effects of imagined odors were equivalent to those of presented odors when an objective measure of taste perception was used (detection of a weak tastant, Study 3), and comparable but more limited when a subjective measure was used (intensity ratings, Study 2). In Study 4, PET was used to investigate changes in cerebral blood flow (CBF) associated with odor imagery. Participants were screened and selected for their odor imagery ability using the behavioral paradigm developed in Study 1. Increased CBF associated with odor imagery was revealed in several areas relevant for olfaction: the left primary olfactory cortical region including piriform cortex, the left secondary olfactory cortical region (posterior orbitofrontal cortex), and the rostral insula bilaterally. Interestingly, increased activity in the primary olfactory cortex and the rostral insula was observed in both the odor imagery and the odor perception subtractions. Based on these findings, I concluded that the effects of imagined odors on sensory processes are specific when compared with visual imagery, and similar to the effects of presented odors. Furthermore, the neural …
13. Simultaneity constancy: unifying the senses across time. Harrar, Vanessa. January 2006.
Thesis (M.A.)--York University, 2006. Graduate Programme in Psychology. / Typescript. Includes bibliographical references (leaves 70-73). Also available on the Internet. / Mode of access: World Wide Web, via the following URL: http://proquest.umi.com/pqdweb?index=0&did=1240699451&SrchMode=1&sid=15&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1195058296&clientId=5220
14. Synaesthesia: an essay in philosophical psychology. Gray, Richard.
Thesis (Ph.D.)--University of Edinburgh, 2001.
15. Learning to match faces and voices.
This study examines whether forming a single identity is crucial to learning to bind faces and voices, or whether people are equally able to do so without tying this information to an identity. To test this, individuals learned face-voice pairings in one of three conditions: True Voice, Gender Matched, or Gender Mismatched. Performance was measured in a training phase as well as a test phase, and results show that participants learned more quickly and reached higher overall performance in the True Voice and Gender Matched conditions. During the test phase, performance was nearly at chance in the Gender Mismatched condition, which may mean that learning in the training phase was simply memorization of the pairings for that condition. The results support the hypothesis that learning to bind faces and voices is a process that involves forming a supramodal identity from multisensory learning. / by Meredith Davidson. / Thesis (M.A.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
16. Infants' perception of synesthetic-like multisensory relations.
Studies have shown that human infants can integrate the multisensory attributes of their world and, thus, have coherent perceptual experiences. Multisensory attributes can specify either non-arbitrary properties (e.g., amodal stimulus/event properties and typical relations) or arbitrary ones (e.g., visuospatial height and pitch). The goal of the current study was to expand on Walker et al.'s (2010) finding that 4-month-old infants looked longer at rising/falling objects when accompanied by rising/falling pitch than when accompanied by falling/rising pitch. We did so by conducting two experiments. In Experiment 1, our procedure matched Walker et al.'s (2010) single-screen presentation, while in Experiment 2 we used a multisensory paired-preference procedure. Additionally, we examined infants' responsiveness to these synesthetic-like events at multiple ages throughout development (4, 6, and 12 months of age). ... In sum, our findings indicate that the ability to match changing visuospatial height with rising/falling pitch does not emerge until the end of the first year of life and throw into doubt Walker et al.'s (2010) claim that 4-month-old infants perceive audiovisual synesthetic relations in a manner similar to adults. / by Nicholas Minar. / Thesis (M.A.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
17. Spatiotemporal brain dynamics of the resting state.
Traditionally, brain function has been studied by measuring physiological responses in controlled sensory, motor, and cognitive paradigms. However, even at rest, in the absence of overt goal-directed behavior, collections of cortical regions consistently show temporally coherent activity. In humans, these resting state networks have been shown to overlap greatly with the functional architectures present during consciously directed activity, which motivates the interpretation of rest activity as day dreaming, free association, stream of consciousness, and inner rehearsal. In monkeys, however, similar coherent fluctuations have been shown to be present during deep anesthesia, when there is no consciousness. These coherent fluctuations have also been characterized on multiple temporal scales, ranging from the fast frequency regimes (1-100 Hz) commonly observed in EEG and MEG recordings to the ultra-slow regimes (< 0.1 Hz) observed in the Blood Oxygen Level Dependent (BOLD) signal of functional magnetic resonance imaging (fMRI). However, the mechanism of their genesis and the origin of the ultra-slow oscillations have not been well understood. Here, we show that comparable resting state networks emerge from a stability analysis of the network dynamics using biologically realistic primate brain connectivity, although anatomical information alone does not identify the network. We specifically demonstrate that noise and time delays, via propagation along connecting fibres, are essential for the emergence of the coherent fluctuations of the default network. The combination of anatomical structure and time delays creates a spacetime structure in which the neural noise enables the brain to explore various functional configurations representing its dynamic repertoire. / Using a simplified network model comprising 3 nodes governed by the dynamics of FitzHugh-Nagumo (FHN) oscillators, we systematically study the role of time delay and coupling strength in the generation of the slow coherent fluctuations. We find that these fluctuations in the BOLD signal are significantly correlated with the level of neural synchrony, indicating that transient interareal synchronizations are the mechanism underlying the emergence of the ultra-slow coherent fluctuations in the BOLD signal. / by Young-Ah Rho. / Thesis (Ph.D.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
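The network model described in this abstract can be sketched compactly in code. The following Python snippet is a minimal illustration, not the author's implementation: it integrates three delay-coupled, noise-driven FitzHugh-Nagumo oscillators with a simple Euler-Maruyama scheme. The parameter values, the all-to-all coupling matrix, and the function name simulate_fhn_network are assumptions made for the example.

```python
# Minimal sketch (assumed parameters, not the thesis code): three delay-coupled,
# noise-driven FitzHugh-Nagumo oscillators integrated with Euler-Maruyama.
import numpy as np

def simulate_fhn_network(T=2000, dt=0.1, delay=10.0, coupling=0.05, noise=0.02, seed=0):
    """Integrate 3 delay-coupled FHN units and return the fast variable over time."""
    rng = np.random.default_rng(seed)
    n = 3
    steps = int(T / dt)
    lag = int(delay / dt)                      # transmission delay in integration steps
    # All-to-all connectivity without self-connections (an assumption).
    C = coupling * (np.ones((n, n)) - np.eye(n))
    v = np.zeros((steps, n))                   # fast (voltage-like) variable
    w = np.zeros((steps, n))                   # slow recovery variable
    v[0] = rng.uniform(-1, 1, n)
    a, b, tau = 0.7, 0.8, 12.5                 # classic FHN parameters (assumed)
    for t in range(1, steps):
        v_delayed = v[t - 1 - lag] if t - 1 - lag >= 0 else v[0]
        inp = C @ v_delayed                    # delayed input from coupled nodes
        dv = v[t - 1] - v[t - 1] ** 3 / 3 - w[t - 1] + inp
        dw = (v[t - 1] + a - b * w[t - 1]) / tau
        v[t] = v[t - 1] + dt * dv + np.sqrt(dt) * noise * rng.standard_normal(n)
        w[t] = w[t - 1] + dt * dw
    return v

if __name__ == "__main__":
    v = simulate_fhn_network()
    # Pairwise correlations as a crude proxy for the coherent fluctuations discussed above.
    print(np.corrcoef(v[500:].T).round(2))
```

Varying the delay and coupling arguments reproduces, in miniature, the kind of parameter exploration the abstract describes; the printed correlation matrix is only a rough stand-in for the synchrony measures used in the thesis.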
18. Exploiting audiovisual attention for visual coding.
Perceptual video coding has been a promising area in recent years. Increases in compression ratios have been reported by applying foveated video coding techniques, in which the region of interest (ROI) is selected using a computational attention model. However, most approaches to perceptual video coding use only visual features and ignore the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect our visual perception. In this work, we validate some of those physiological findings using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting different experiments, we observed that, in general, reaction time to detect video artifacts was higher when the video was presented with audio. We observed that emotional information in audio guides human attention to particular ROIs. We also observed that sound frequency changes spatial frequency perception in still images. / by Freddy Torres. / Thesis (M.S.C.S.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
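As a rough illustration of the foveated-coding idea mentioned in this abstract (a sketch under stated assumptions, not the thesis's tool or encoder), the following Python snippet derives per-block quantization-parameter offsets from an attention-selected ROI: blocks near the attended point keep fine quantization, while the periphery is quantized more coarsely. The function name qp_offsets, the Gaussian falloff, and all numeric values are hypothetical.

```python
# Illustrative sketch: per-block QP offsets from an ROI chosen by an attention model.
import numpy as np

def qp_offsets(frame_h, frame_w, roi_center, block=16, sigma_px=120.0, max_offset=8):
    """Return an integer QP-offset map with one value per block.

    roi_center: (x, y) in pixels, e.g. the output of an audiovisual attention model.
    sigma_px:   width of the foveal region in pixels (assumed value).
    max_offset: largest QP increase allowed in the periphery (assumed value).
    """
    by, bx = frame_h // block, frame_w // block
    ys, xs = np.mgrid[0:by, 0:bx]
    # Block centers in pixel coordinates.
    cx = xs * block + block / 2
    cy = ys * block + block / 2
    d2 = (cx - roi_center[0]) ** 2 + (cy - roi_center[1]) ** 2
    weight = np.exp(-d2 / (2 * sigma_px ** 2))      # 1 at the ROI, ~0 far away
    return np.round((1.0 - weight) * max_offset).astype(int)

if __name__ == "__main__":
    offsets = qp_offsets(720, 1280, roi_center=(640, 360))
    print(offsets.shape, offsets.min(), offsets.max())
```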
19. Multisensory Cues Facilitate Infants’ Ability to Discriminate Other-Race Faces.
Our everyday world consists of people and objects that are usually specified by dynamic and concurrent auditory and visual attributes, which is known to increase perceptual salience and, therefore, facilitate learning and discrimination in infancy. Interestingly, early experience with faces and vocalizations has two seemingly opposite effects during the first year of life: 1) it enables infants to gradually acquire perceptual expertise for the faces and vocalizations of their own race and 2) it narrows their ability to discriminate other-race faces (Kelly et al., 2007). It is not known whether multisensory redundancy might help older infants overcome the other-race effect reported in previous studies. The current project investigated infant discrimination of dynamic and vocalizing other-race faces in younger and older infants using habituation and eye-tracking methodologies. Experiment 1 examined 4-6- and 10-12-month-old infants' ability to discriminate either a native or non-native face articulating the syllable /a/. Results showed that both the 4-6- and the 10-12-month-olds successfully discriminated the faces, regardless of whether they were same- or other-race faces. Experiment 2 investigated the contribution of auditory speech cues by repeating Experiment 1 in silence. Results showed that only the 10-12-month-olds tested with native-race faces successfully discriminated them. Experiment 3 investigated whether it was speech per se or sound in general that facilitated discrimination of the other-race faces in Experiment 1 by presenting a synchronous, computer-generated "boing" sound instead of audible speech cues. Results indicated that the 4-6-month-olds discriminated both types of faces but that the 10-12-month-olds only discriminated own-race faces. These results indicate that auditory cues, along with dynamic visual cues, can help infants overcome the effects of previously reported narrowing and facilitate discrimination of other-race static, silent faces. Critically, our results show that older infants can overcome the other-race effect when dynamic faces are accompanied by speech but not when they are accompanied by non-speech cues. Overall, a generalized auditory facilitation effect was found as a result of multisensory speech. Moreover, our findings suggest that infants' ability to process other-race faces following perceptual narrowing is more plastic than previously thought. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
20. The role of synesthetic correspondence in intersensory binding: investigating an unrecognized confound in multimodal perception research. Olsheski, Julia DeBlasio. 13 January 2014.
The current program of research tests the following main hypotheses: 1) Synesthetic correspondence is an amodal property that serves to bind intersensory signals and manipulating this correspondence between pairs of audiovisual signals will affect performance on a temporal order judgment (TOJ) task; 2) Manipulating emphasis during a TOJ task from spatial to temporal aspects will strengthen the influence of task-irrelevant auditory signals; 3) The degree of dimensional overlap between audiovisual pairs will moderate the effect of synesthetic correspondence on the TOJ task; and 4) There are gaps in current perceptual theory due to the fact that synesthetic correspondence is a potential confound that has not been sufficiently considered in the design of perception research. The results support these main hypotheses. Finally, potential applications for the findings presented here are discussed.
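For readers unfamiliar with the temporal order judgment (TOJ) paradigm referenced in these hypotheses, the following Python snippet sketches a generic TOJ analysis: fitting a cumulative Gaussian to the proportion of "visual first" responses across audiovisual onset asynchronies to recover the point of subjective simultaneity (PSS) and the just noticeable difference (JND). It is not the dissertation's analysis, and the SOA values and response proportions are invented for illustration.

```python
# Generic TOJ analysis sketch (hypothetical data, not from the dissertation).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P("visual first") as a function of SOA (positive SOA = visual leads)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOAs in ms and proportion of "visual first" responses.
soas = np.array([-200, -100, -50, -25, 0, 25, 50, 100, 200], dtype=float)
p_visual_first = np.array([0.05, 0.15, 0.30, 0.42, 0.55, 0.68, 0.80, 0.92, 0.97])

params, _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 50.0])
pss, sigma = params
jnd = sigma * norm.ppf(0.75)   # SOA shift from the PSS needed to reach 75% "visual first"

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

In this framework, a shift in the fitted PSS or JND between synesthetically congruent and incongruent audiovisual pairs would be the kind of effect the first hypothesis predicts.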