1 |
Factors that Influence Short-term Learning of Visual-Tactile Associations: An Investigation of Behavioural Performance and the Associated Electrophysiological Mechanisms. Mackay, Michelle. January 2009.
Neuroplasticity is the mechanism by which the brain changes its configuration and function through experience. Short-term learning (minutes to hours) is associated with the early phases of neuroplasticity, in which cortical responses to repeated stimuli increase, and it underlies long-term learning (days to weeks). Because tactile sensation is an important sense, it would be valuable, should it become compromised, to understand the neural mechanisms underlying tactile short-term learning, and other means of promoting learning, such as the introduction of a second modality. Greater knowledge of somatosensory learning could in turn support long-term learning and potential recovery of function after brain injury such as stroke. The focus of this thesis was to investigate the role of visual information in short-term somatosensory learning, and to understand the electrophysiological mechanisms associated with this modulation of learning within a single testing session.
The methodology consisted of learning Morse code tactile patterns corresponding to English letters and was divided into two experiments. The first experiment aimed to determine the functional benefit to performance of temporally and spatially coupling tactile and visual stimuli; the second was used to determine the electrophysiological mechanisms associated with the modulation of somatosensory processing by visual stimulation. Given a quantifiable measure of learning, we hypothesized that tactile-visual cross-modal coupling would improve the learning outcome and provide a functional benefit. It has been shown that presenting a visual stimulus at the same spatial site as the corresponding tactile stimulus enhances the measurable components (Eimer et al., 2001) and improves behavioural performance (Ohara et al., 2006). The current results demonstrated that visual-tactile cross-modal association can have a positive effect on learning over a short period of time, and that presenting a visual stimulus prior to a tactile stimulus may benefit performance during the early stages of learning. The results of the second experiment also demonstrated an elevated and prolonged tactile P100, and a notably absent N140 component, when tactile information was presented before visual information. Further research extending from this thesis is needed to advance understanding of the behavioural and electrophysiological outcomes of visual-tactile cross-modal associations. The findings give insight into the behavioural and electrophysiological effects involved in short-term somatosensory learning, specifically how manipulating a visual stimulus, both spatially and temporally, can affect tactile learning as indexed by behavioural performance, and can affect the electrophysiological mechanisms of somatosensory processing.
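The tactile stimuli in this task were Morse code patterns for English letters. A minimal sketch of such a letter-to-pattern mapping follows; the pulse durations and the subset of letters are illustrative assumptions, not details taken from the thesis:

```python
# Hypothetical sketch: map a few English letters to Morse patterns
# (dot = short tactile pulse, dash = long pulse), as in the
# learning task described above. Durations are assumptions.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "S": "...", "O": "---",
}

def to_pulses(letter):
    """Translate a letter into a sequence of pulse durations (ms),
    using 100 ms for a dot and 300 ms for a dash."""
    return [100 if symbol == "." else 300 for symbol in MORSE[letter.upper()]]

print(to_pulses("S"))  # three short pulses
print(to_pulses("A"))  # one short pulse, one long pulse
```

Learning the association then amounts to mapping each felt pulse sequence back to its letter, which is what the behavioural measures above quantify.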
|
3 |
CROSS-MODAL EFFECTS OF DAMAGE ON MECHANICAL BEHAVIOR OF HUMAN CORTICAL BONE. Joo, Won. January 2006.
No description available.
|
4 |
Audio-visual interactions in manual and saccadic responses. Makovac, Elena. January 2013.
Chapter 1 introduces the notions of multisensory integration (the binding of information coming from different modalities into a unitary percept) and multisensory response enhancement (the improvement of the response to multisensory stimuli, relative to the response to the most efficient unisensory stimulus), as well as the general goal of the present thesis, which is to investigate different aspects of the multisensory integration of auditory and visual stimuli in manual and saccadic responses. The subsequent chapters report experimental evidence of different factors affecting the multisensory response: spatial discrepancy, stimulus salience, congruency between cross-modal attributes, and the inhibitory influence of concurring distractors. Chapter 2 reports three experiments on the role of the superior colliculus (SC) in multisensory integration. In order to achieve this, the absence of S-cone input to the SC has been exploited, following the method introduced by Sumner, Adamjee, and Mollon (2002). I found evidence that the spatial rule of multisensory integration (Meredith & Stein, 1983) applies only to SC-effective (luminance-channel) stimuli, and does not apply to SC-ineffective (S-cone) stimuli. The same results were obtained with an alternative method for the creation of S-cone stimuli: the tritanopic technique (Cavanagh, MacLeod, & Anstis, 1987; Stiles, 1959; Wald, 1966). In both cases significant multisensory response enhancements were obtained using a focused attention paradigm, in which the participants had to focus their attention on the visual modality and to inhibit responses to auditory stimuli. Chapter 3 reports two experiments showing the influence of shape congruency between auditory and visual stimuli on multisensory integration; i.e. the correspondence between structural aspects of visual and auditory stimuli (e.g., spiky shape and “spiky” sounds). 
Detection of audio-visual events was faster for congruent than for incongruent pairs, and this congruency effect also occurred in a focused attention task, in which participants were required to respond only to visual targets and could ignore irrelevant auditory stimuli. This particular type of cross-modal congruency has been evaluated in relation to the inverse effectiveness rule of multisensory integration (Meredith & Stein, 1983). In Chapter 4, the locus of the cross-modal shape congruency effect was evaluated by applying the race model analysis (Miller, 1982). The results showed that violation of the model is stronger for some congruent pairings than for incongruent pairings. Evidence of multisensory depression was found for some pairs of incongruent stimuli. These data imply a perceptual locus for the cross-modal shape congruency effect. Moreover, it is evident that multisensory stimulation does not always induce an enhancement; in some cases, when the attributes of the stimuli are particularly incompatible, a unisensory response may be more effective than the multisensory one. Chapter 5 reports experiments centred on saccadic generation mechanisms. Specifically, the multisensory nature of the saccadic inhibition (SI; Reingold & Stampe, 2002) phenomenon is investigated. Saccadic inhibition refers to a characteristic inhibitory dip in saccadic frequency beginning 60-70 ms after the onset of a distractor. The very short latency of SI suggests that the distractor interferes directly with subcortical target selection processes in the SC. The impact of multisensory stimulation on SI was studied in four experiments. In Experiments 7 and 8, a visual target was presented with a concurrent auditory, visual or audio-visual distractor. Multisensory audio-visual distractors induced stronger SI than did unisensory distractors, but there was no evidence of multisensory integration (as assessed by a race model analysis).
In Experiments 9 and 10, visual, auditory or audio-visual targets were accompanied by a visual distractor. When there was no distractor, multisensory integration was observed for multisensory targets. However, this multisensory integration effect disappeared in the presence of a visual distractor. As a general conclusion, the results from Chapter 5 indicate that multisensory integration occurs for target stimuli, but not for distracting stimuli, and that the process of audio-visual integration is itself sensitive to disruption by distractors.
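The race model analysis used above tests whether the multisensory reaction-time (RT) distribution beats the bound predicted by probability summation of the two unisensory distributions (Miller, 1982); a violation of that bound is taken as evidence of integration. A minimal sketch, with illustrative RTs rather than data from these experiments:

```python
# Sketch of Miller's (1982) race-model inequality test: the
# multisensory CDF is compared against the (capped) sum of the two
# unisensory CDFs; positive differences indicate a violation.
# The reaction times (ms) below are illustrative, not thesis data.
def ecdf(rts, t):
    """Empirical cumulative probability of a response by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(rt_multi, rt_a, rt_v, t):
    """Positive when P(RT_multi <= t) exceeds the race-model bound
    min(1, P(RT_a <= t) + P(RT_v <= t))."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_multi, t) - bound

rt_a  = [250, 270, 300, 320]   # auditory-only trials
rt_v  = [260, 280, 310, 330]   # visual-only trials
rt_av = [210, 230, 250, 270]   # audio-visual trials

print(race_violation(rt_av, rt_a, rt_v, 240))
```

In practice the test is run across a range of quantiles of the RT distributions, not at a single time point as in this toy example.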
|
5 |
Investigating the effects of visual deprivation on subcortical and cortical structures using functional MRI and MR spectroscopy. Coullon, Gaelle Simone Louise. January 2015.
Visual deprivation in early life causes widespread changes to the visual pathway. Structures normally dedicated to vision can be recruited for processing of the remaining senses (i.e. audition). This thesis used magnetic resonance imaging to explore how the 'visual' pathway reorganises in congenital bilateral anophthalmia, a condition where individuals are born without eyes. Anophthalmia provides a unique model of complete deprivation, since the ‘visual’ pathway has not experienced pre- or post-natal visual input. Firstly, this thesis explored reorganisation of the anophthalmic 'visual' pathway for auditory processing, from subcortical structures responding to basic sounds (Chapters 3 and 4), to higher-order occipital areas extracting meaning from speech sounds (Chapter 7). Secondly, this thesis looked to better understand the neurochemical, neuroanatomical and behavioural changes that accompany reorganisation in anophthalmia (Chapters 5 and 6). Finally, this thesis investigated whether similar changes can take place in the sighted brain after a short period of visual deprivation (Chapter 8). The experiments in this thesis provide some evidence that the lack of pre-natal visual experiences affects cross-modal reorganisation. Chapter 4 describes a unique subcortico-cortical route for auditory input in anophthalmia. Furthermore, Chapter 7 suggests that hierarchical processing of sensory information in the occipital cortex is maintained in anophthalmia, which may not be the case in congenital or early-onset blindness. However, this thesis also suggests that some reorganisation thought to be limited to anophthalmia can be found in early-onset blindness, for example with the subcortical functional changes described in Chapter 3. In addition, neurochemical, neuroanatomical and behavioural changes described in Chapters 5 and 6 are comparable to those reported in early-onset blindness, therefore demonstrating important similarities between these populations. 
Finally, this thesis describes how some of these functional and behavioural changes can also take place in sighted subjects after a short period of blindfolding, although this effect is extremely variable across subjects (Chapter 8). The thesis concludes by highlighting the considerable contribution of individual differences in studies of cross-modal reorganisation, and emphasises the need for larger, more homogeneous groups when investigating subcortical and cortical plasticity in the absence of visual input.
|
6 |
Prosody and on-line parsing. Schepman, Astrid Helena Baltina Catherina. January 1997.
No description available.
|
7 |
Reconhecimento de emoções em cães domésticos (Canis familiaris): percepção de pistas faciais e auditivas na comunicação intra e interespecífica / Emotion recognition in domestic dogs (Canis familiaris): perception of facial and auditory cues in intra and interspecific communication. Albuquerque, Natalia de Souza. 14 October 2013.
Domestic dogs (Canis familiaris) are social animals that show a range of cognitive abilities for interacting with other dogs and with people. Although many studies of dogs have investigated the use of communicative cues, sensitivity to attentional states, the capacity to discriminate faces and vocalizations, and even the processing of facial expressions, there has been no evidence that these animals can simultaneously obtain and use emotional information from facial and auditory expressions. The recognition of emotional states can be understood as an adaptive trait, since it plays a very important role in social contexts and may be crucial to the establishment and maintenance of long-term relationships. To investigate the abilities of reading and understanding emotions, we used a preferential looking paradigm to test family dogs of various breeds on their ability to recognize emotions cross-modally. We analysed the spontaneous looking behaviour of subjects presented with two visual stimuli (the same individual with different facial expressions) and a sound (vocalization) congruent with one of the two images. We used canine and human stimuli, from females and males, with positive and negative valence, presented on the left and on the right, and assessed the possible effects of these factors on the animals' performance. The variable used for the analyses was the congruence index: the proportion of time spent looking at the congruent image relative to the total time spent looking at the screens. The dogs proved capable of associating information from faces (photographs) and vocalizations (playbacks) and integrating it into a single mental representation, independent of the species, sex, valence and side of presentation of the stimulus. The only effect we found was that of species: although subjects showed the recognition ability for both canine and human stimuli, they did so more robustly for conspecifics. This may suggest that the ability to recognize emotions cross-modally initially arose for intraspecific communication but, having facilitated living alongside humans, developed to make interspecific communication more efficient. Cross-modal recognition can be understood as true recognition, and it suggests a higher and more complex level of cognitive processing. This research thus provides the first evidence that domestic dogs are able to understand (perceive and extract relevant information from) emotions, and not merely discriminate them. The interactions between an individual and the world are multidimensional, and perceiving the emotions of other dogs and of people may be highly functional.
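The congruence index described above reduces to a simple proportion of looking times. A minimal sketch, with illustrative looking times that are assumptions rather than data from the study:

```python
# Sketch of the congruence index from the preferential looking
# paradigm: time spent looking at the congruent image over total
# looking time at both screens. Times (s) are illustrative.
def congruence_index(t_congruent, t_incongruent):
    """Proportion of looking time directed at the congruent image;
    values above 0.5 indicate a preference for the congruent image."""
    total = t_congruent + t_incongruent
    if total == 0:
        raise ValueError("no looking time recorded on either screen")
    return t_congruent / total

print(congruence_index(6.2, 3.8))   # preference for the congruent image
print(congruence_index(5.0, 5.0))   # no preference
```

A group mean index reliably above 0.5 is what licenses the conclusion that the sound and the facial expression were matched.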
|
9 |
A systematic study of personification in synaesthesia: behavioural and neuroimaging studies. Sobczak-Edmans, Monika. January 2013.
In synaesthetic personification, personality traits and other human characteristics are attributed to linguistic sequences and objects. Such non-perceptual concurrents differ from those found in the most frequently studied types of synaesthesia, in which the eliciting stimuli induce sensory experiences. Here, subjective reports from synaesthetes were analysed and the cognitive and neural mechanisms underlying personification were investigated. Specifically, the neural bases of personification were examined using functional MRI in order to establish whether brain regions implicated in social cognition are involved in implementing personification. Additional behavioural tests were used to determine whether personification of inanimate objects is automatic in synaesthesia. Subjective reports describing general characteristics of synaesthetic personification were collected using a semi-structured questionnaire. A Stroop-like paradigm, similar to those used in previous investigations, was developed to examine the automaticity of object personification. Synaesthetes were significantly slower in responding to incongruent than to congruent stimuli; this difference was not found in the control group. The functional neuroimaging investigations demonstrated that brain regions involved in synaesthetic personification of graphemes and objects partially overlap with brain areas activated in normal social cognition, including the temporo-parietal junction, precuneus and posterior cingulate cortex. Activations were observed in areas known to be correlated with mentalising, reflecting the social and affective character of concurrents described in subjective reports. Psychological factors linked with personification in previous studies were also assessed in personifiers, using empathy, mentalising and loneliness scales.
Neither heightened empathy nor mentalising was found to be necessary for personification, but the personifying synaesthetes in the study felt lonelier than the general population, and this was more pronounced in those who personified more. These results demonstrate that personification shares many defining characteristics with classical forms of synaesthesia. Ascribing humanlike characteristics to graphemes and objects is a spontaneous and automatic process, inducer-concurrent pairings are consistent over time, and the phenomenological character of concurrents is reflected in functional neuroanatomy. Furthermore, the neuroimaging findings are consistent with the suggestion that synaesthetes have a lower threshold for activating brain regions implicated in self-projection and mentalising, which may facilitate personification processes in synaesthesia.
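The Stroop-like result above (synaesthetes slower on incongruent than congruent trials, controls not) is typically quantified as a congruency effect: the mean reaction-time cost of incongruent trials. A minimal sketch with illustrative reaction times, not data from this study:

```python
# Sketch of a Stroop-like congruency effect: mean RT (ms) on
# incongruent trials minus mean RT on congruent trials. A positive
# effect in synaesthetes but not controls is the automaticity
# signature described above. RTs below are illustrative.
from statistics import mean

def congruency_effect(rt_incongruent, rt_congruent):
    """Mean reaction-time cost (ms) of incongruent trials."""
    return mean(rt_incongruent) - mean(rt_congruent)

synaesthete = congruency_effect([720, 750, 710], [640, 660, 650])
control = congruency_effect([655, 640, 660], [650, 645, 655])
print(synaesthete, control)
```

In the study itself this difference would of course be tested statistically across participants, not read off single means.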
|
10 |
Mechanisms of Cross-Modal Refinement by Visual Experience. Brady, Daniel. 28 February 2013.
Alteration of one sensory system can have striking effects on the processing and organization of the remaining senses, a phenomenon known as cross-modal plasticity. The goal of this thesis was to understand the circuit basis of this form of plasticity. I established the mouse as a model system for studying cross-modal plasticity by comparing population activity in visual cortex between animals reared in complete darkness from birth (DR) and those housed in a normal light/dark environment (LR). I found that secondary visual cortex (V2L) responds much more strongly to auditory stimuli in DR than in LR animals. I provide evidence that there is a sensitive period for cross-modal responses that ends in early adulthood. I also show that exposure to light later in life reduces V2L auditory activity to LR levels. I recorded single units to show that there is a higher percentage of auditory-responsive neurons in DR V2L. In collaboration with Lia Min in Michela Fagiolini's laboratory, we discovered that this was associated with an increase in the number of projections from auditory thalamus and auditory cortex. We also provide evidence that V2L is multimodal from birth and becomes less so with visual experience. I examined several molecular pathways that are affected by dark-rearing to see if they are involved in cross-modal plasticity. I found that Nogo receptor (NgR), Lynx1 and Icam5 signaling all play a fundamental role in controlling the duration of plasticity. I also show that the hyperconnectivity in NgR-/- and DR mice leads to an increase in multisensory enhancement. In primary visual cortex, cross-modal influences were much weaker. As in V2L, the distribution of cell types was affected by NgR signaling. I also found that both the range of cross-modal influence and its sign (excitatory or inhibitory) depend on visual experience. Finally, I show that NgR signaling and the maturation of inhibitory circuits affect these two properties.
Together, these results provide evidence of the molecular mechanisms underlying cross-modal plasticity. We believe that this will further our knowledge of how to improve rehabilitation strategies after loss of a sensory system.
|