About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The Investigation of Cross-Modal Transfer across Visual and Tactile Sensory Modalities in Children with Autism

Doherty, Meghan Michelle 01 May 2017 (has links)
In the present study, two children diagnosed with autism spectrum disorder were taught to identify reflexive relations across three varying stimuli using procedures outlined in the Promoting the Emergence of Advanced Knowledge Equivalence Module (PEAK-E). Two programs from the PEAK-E module, 2B and 3C, were used; both incorporated reflexive relations across two differing sensory modalities. Visual relations were directly trained with the participants, while the tactile relations were derived and monitored through probes. The same three stimuli were used in both PEAK-E programs for each participant; however, those three stimuli varied across participants. All stimuli were retrieved from the participants’ environments and were objects familiar to the participants. The results indicate that corrective feedback and praise in only one sense mode, visual, were required for cross-modal transfer to occur to the second sense mode, tactile. Both participants demonstrated that they had acquired the reflexive skills for both visual and tactile stimuli. Participant 1 reached mastery criterion for both skills in 36 trials, and participant 2 reached mastery criterion within 20 trials. Limitations and future directions for the application of cross-modal transfer are discussed.
12

Cross-modal cue effects in psychophysics, fMRI, and MEG in motion perception

Hanada, Grant Masata January 2012 (has links)
Thesis (M.Sc.Eng.) PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / Motion perception is critical to navigation within the environment and has been studied primarily in the unisensory visual domain. However, the real world is not unisensory but contains motion information from several modalities. With the billions of sensory stimuli our brains receive every second, many complex processes must be executed in order to properly filter relevant motion-related information. In transparent motion, when more than one velocity field occupies the same visual space, our brains must be able to separate out conflicting forms of motion using environmental cues. But even in unimodal visual situations, one often uses information from other modalities for guidance. We studied this phenomenon psychophysically, examining cross-modal (visual and auditory) cues and their role in detecting transparent motion. To further examine these ideas, we used magnetoencephalography (MEG) in a single subject to explore the spatiotemporal characteristics of the neural substrates involved in using these different cues for motion detection. Another dimension of motion perception is involved when the observer is moving and must therefore deal with self-motion and changing environmental cues. To better understand this, we used a psychophysical visual search task that has been well studied in our lab to determine whether subjects use a simple relative-motion computation to detect moving objects during self-motion or instead utilize scene context, and how this might change when given a cross-modal auditory cue. To find the spatiotemporal neural characteristics involved in this process, functional magnetic resonance imaging (fMRI) and MEG were performed separately in elderly subjects (healthy individuals and a stroke patient) and compared with previous studies of young healthy subjects performing the same task. / 2031-01-01
13

Modulation of colour and odour perception, and cross-modal correspondences for women in the menstrual cycle and menopause / 月経サイクルと閉経における色とにおいの知覚と多感覚の調整

Iriguchi, Mayuko 25 March 2019 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Science / Degree no. 甲第21611号 / 理博第4518号 / Call number 新制||理||1648 (University Library) / Division of Biological Science, Graduate School of Science, Kyoto University / (Chief examiner) Professor 正高 信男, Associate Professor 後藤 幸織, Professor 髙井 正成 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
14

Can gaze-cueing be helpful for detecting sound in autism spectrum disorder? / 自閉症スペクトラムにおいて視線手掛かりは聴覚的注意を促進するだろうか?

Zhao, Shuo 24 March 2014 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Human Health Sciences / Degree no. 甲第18203号 / 人健博第20号 / Call number 新制||人健||2 (University Library) / 31061 / Human Health Sciences, Graduate School of Medicine, Kyoto University / (Chief examiner) Professor 三谷 章, Professor 精山 明敏, Professor 髙橋 良輔 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Human Health Sciences / Kyoto University / DFAM
15

A Comparison Of Attentional Reserve Capacity Across Three Sensory Modalities

Brill, John 01 January 2007 (has links)
There are two theoretical approaches to the nature of attentional resources. One proposes a single, flexible pool of cognitive resources; the other posits that there are multiple resources. This study was designed to systematically examine whether there is evidence for multiple resource theory, using two experiments built around a counting task with visual, auditory, and tactile signals. The goal of the first experiment was the validation of a multi-modal secondary loading task. Thirty-two participants performed nine variations of a multi-modal counting task incorporating three modalities and three demand levels. Performance and subjective ratings of workload were measured for each of the nine conditions of the within-subjects design. Significant differences were found on the basis of task demand level, irrespective of modality. Moreover, the perceived workload associated with the tasks differed by task demand level and not by modality. These results suggest the counting task is a valid means of imposing task demands across multiple modalities. The second experiment used the same counting task as a secondary load to a primary visual monitoring task, the system monitoring component of the Multi-Attribute Task Battery (MATB). The experimental conditions consisted of performing the system monitoring task alone as a reference and performing system monitoring combined with visual, auditory, or tactile counting. Thirty-one participants were exposed to all four experimental conditions in a within-subjects design. Performance on the primary and secondary tasks was measured, and subjective workload was assessed for each condition. Participants were instructed to maintain performance on the primary task irrespective of condition, which they did effectively. Secondary task performance for the visual-auditory and visual-tactile conditions was significantly better than for the visual-visual dual-task condition. Subjective workload ratings were also consistent with the performance measures. These results clearly indicate that there is less interference for cross-modal tasks than for intramodal tasks, adding evidence in support of multiple resource theory. Finally, these results have practical implications that include human performance assessment for display and alarm development, assessment of attentional reserve capacity for adaptive automation systems, and training.
16

IMPOSSIBLE ART: SYNESTHESIA, SENSORY MIMESIS, AND THE EMERGENCE OF CROSS-MODAL WORKS OF MODERN ART AND LITERATURE

Loh, Vanessa 08 1900 (has links)
This dissertation investigates the turn-of-the-century fascination with synesthesia and efforts by Modernist artists and writers to produce cross-modal works that attempt to defy sensory boundaries. Works of impossible art are artistic and literary experiments with style and form that develop out of the realism and naturalism of the nineteenth century, to be sure; they are also conceived of by their creators as scientific experiments that test what is possible at the limits of perception. Accordingly, while my work is situated within the field of aesthetics, I take a neuroscientific approach to aid in understanding the modes of perception these works are attempting to explore. My project applies the findings of recent neuroscientific studies of clinical synesthesia as a guide for thinking about these Modernist works. The methodology of neuro-aesthetics allows me to develop a theory of sensory mimesis. Sensory mimesis is a holistic approach to explaining phenomenological experience that depends on a sensory semantics, more fundamental and more comprehensive than a linguistic semantics, which I propose filters our access to the world. What we ultimately learn from impossible art is that the range of neurodiversity in humans is broader than we tend to acknowledge or appreciate. The notoriously indefinable and uncategorizable character of queer theory makes it an apt framework for the innumerable neurocognitive possibilities that are actually available. To this end, my dissertation suggests that a shift to a neuro-queer-aesthetic paradigm would not only expand human perceptive possibilities, but also enable compassionate engagement within and among our diverse communities. / English
17

Cognitive cross-modal integration in a wolf spider, Schizocosa ocreata (Hentz) (Lycosidae)

Kozak, Elizabeth C. 15 October 2015 (has links)
No description available.
18

Pictures, Pantomimes, and a Thousand Words: The Neuroscience of Cross-Modal Narrative Communication in Humans

Yuan, Ye 11 1900 (has links)
Communication is the exchange of thoughts and ideas from one person to another, often through the form of narratives. People communicate using speech, gesture, and drawing, or some multimodal combination of the three. Although there has been much research on how we understand and produce speech and pantomimes, there is relatively little on drawing, and even less on cross-modal communication. This dissertation presents novel empirical findings that contribute to a better understanding of the brain areas that mediate narrative communication across speech, pantomime, and drawing. Since the neuroscience of drawing was so understudied, I first used functional magnetic resonance imaging (fMRI) to investigate the existence of a basic drawing network in the human brain (Chapter 2). The drawing network was shown to contain three visual-motion areas that process the emanation of the visual image as drawing occurs. Next, to follow up on the poorly-characterized structural connectivity of these areas in the human dorsal visual stream, I used diffusion imaging to explore how these dorsal stream areas are connected (Chapter 3). The tractography results showed structural connectivity for two of the three predicted branches connecting the three visual-motion areas. Finally, I used fMRI to investigate how the basic drawing network is recruited during the more complex task of narrative drawing, and to find common brain areas among narrative speech, pantomime, and drawing (Chapter 4). Results suggest that people approached narratives in an intrinsically mentalistic fashion in terms of the protagonist, rather than as a mere description of action sequences. Together, these studies advance our understanding of the brain areas that comprise a basic drawing network, how these brain areas are interconnected, and how we communicate stories across three modalities of production. I conclude with a general discussion of my findings (Chapter 5). / Thesis / Doctor of Philosophy (PhD)
19

Adaptation minimizes distance-related audiovisual delays

Heron, James, Whitaker, David J., McGraw, Paul V., Horoshenkov, Kirill V. January 2007 (has links)
A controversial hypothesis within the domain of sensory research is that observers are able to use visual and auditory distance cues to maintain perceptual synchrony - despite the differential velocities of light and sound. Here we show that observers are categorically unable to utilize such distance cues. Nevertheless, given a period of adaptation to the naturally occurring audiovisual asynchrony associated with each viewing distance, a temporal recalibration mechanism helps to perceptually compensate for the effects of distance-induced auditory delays. These effects demonstrate a novel functionality of temporal recalibration with clear ecological benefits.
20

Modeling Synergistic Relationships Between Words and Images

Leong, Chee Wee 12 1900 (has links)
Texts and images provide alternative yet orthogonal views of the same underlying cognitive concept. By uncovering synergistic, semantic relationships that exist between words and images, I am working to develop novel techniques that can help improve tasks in natural language processing, as well as effective models for text-to-image synthesis, image retrieval, and automatic image annotation. Specifically, in my dissertation, I will explore the interoperability of features between language and vision tasks. In the first part, I will show how it is possible to apply features generated using evidence gathered from text corpora to solve the image annotation problem in computer vision, without the use of any visual information. In the second part, I will address research in the reverse direction, and show how visual cues can be used to improve tasks in natural language processing. Importantly, I propose a novel metric to estimate the similarity of words by comparing the visual similarity of the concepts invoked by these words, and show that it can be used to further advance state-of-the-art methods that employ corpus-based and knowledge-based semantic similarity measures. Finally, I attempt to construct a joint semantic space connecting words with images, and synthesize an evaluation framework to quantify cross-modal semantic relationships that exist between arbitrary pairs of words and images. I study the effectiveness of unsupervised, corpus-based approaches to automatically derive the semantic relatedness between words and images, and perform empirical evaluations by measuring their correlation with human annotations.
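As a purely illustrative aside (a minimal sketch, not code or data from the dissertation), the idea of comparing the visual similarity of concepts invoked by two words can be approximated by averaging image feature vectors retrieved for each word, taking a cosine similarity, and blending the result with a corpus-based score; the feature vectors, blend weight, and corpus-based score below are all placeholder assumptions.

```python
# Illustrative sketch only: word relatedness from the visual similarity of the
# images each word invokes, blended with a corpus-based score. The "image
# features" here are random placeholders, not data from the dissertation.
import numpy as np

def visual_relatedness(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Cosine similarity between the mean feature vectors of the images
    retrieved for two words (rows = images, columns = feature dimensions)."""
    a, b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def blended_relatedness(visual: float, corpus: float, alpha: float = 0.5) -> float:
    """Simple linear blend of a visual score and a corpus-based score."""
    return alpha * visual + (1.0 - alpha) * corpus

# Toy example with random vectors standing in for real image descriptors.
rng = np.random.default_rng(0)
imgs_word1 = rng.normal(size=(10, 128))  # features of images invoked by word 1
imgs_word2 = rng.normal(size=(10, 128))  # features of images invoked by word 2
score = blended_relatedness(visual_relatedness(imgs_word1, imgs_word2), corpus=0.6)
print(f"blended relatedness: {score:.3f}")
```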
