About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Les habiletés olfactives des aveugles de naissance : organisation anatomo-fonctionnelle et aspects comportementaux [Olfactory abilities of the congenitally blind: anatomical-functional organization and behavioral aspects]

Beaulieu Lefebvre, Mathilde
It is generally acknowledged that people blind from birth develop supra-normal sensory abilities to compensate for their visual deficit. While extensive research has examined the somatosensory and auditory modalities of the blind, information about their sense of smell remains scant. The goal of this study was therefore to understand olfactory processing in the blind at the behavioral and neuroanatomical levels. Since blind individuals use their remaining senses in a compensatory way to assess their environment, and since the olfactory system is highly plastic, it is likely to undergo changes similar to those observed for the tactile and auditory modalities. We used psychophysical testing and functional magnetic resonance imaging (fMRI) to investigate the neuronal substrates responsible for odor processing. Our data showed that blind subjects had a lower odor detection threshold than the sighted, whereas no group differences were found for odor discrimination or odor identification. Interestingly, the Odor Awareness Scale (OAS) revealed that blind participants scored higher for odor awareness. Our fMRI data revealed stronger BOLD responses in the right lateral orbitofrontal cortex, bilateral mediodorsal thalamus, right hippocampus, and left occipital cortex of the blind participants during an odor detection task. We conclude that blind subjects rely more on their sense of smell than the sighted to assess their environment and to recognize places and people. This is the first demonstration that the visual cortex of the blind can also be recruited by odorants, adding new evidence for its multimodal function.
32

Delineating the Neural Circuitry Underlying Crossmodal Object Recognition in Rats

Reid, James, 15 September 2011
Previous research has indicated that the perirhinal cortex (PRh) and posterior parietal cortex (PPC) functionally interact to mediate crossmodal object representations in rats; however, it remains to be seen whether other cortical regions contribute to this cognitive function. The prefrontal cortex (PFC) has been widely implicated in crossmodal tasks and might underlie either a unified multimodal or amodal representation, or a comparison mechanism that allows object information to be integrated across sensory modalities. The hippocampus (HPC), with its extensive polymodal inputs, is also a strong candidate and has been implicated in some aspects of object recognition. A series of lesion-based experiments assessed the roles of the HPC, the PFC, and PFC subregions [the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC)], revealing functional dissociations between these brain regions using two versions of crossmodal object recognition: (1) spontaneous crossmodal matching (CMM), which requires rats to compare a stored tactile object representation with visually presented objects to discriminate between novel and familiar stimuli; and (2) crossmodal object association (CMA), in which simultaneous pre-exposure to the tactile and visual elements of an object enhances CMM performance across long retention delays. Notably, while inclusive PFC lesions impaired both the CMM and CMA tasks, selective OFC lesions disrupted only CMM, whereas selective mPFC damage did not impair performance on either task. Furthermore, HPC lesions had no impact on either the CMM or the CMA task. Thus, the PFC and the OFC play selective roles in crossmodal object recognition, but the exact contributions and interactions of these regions will require further research to elucidate. / Natural Sciences and Engineering Research Council of Canada (NSERC)
34

The audiovisual object

Connor, Andrew John Caldwell, January 2017
The 'audiovisual object' is a fusion of sound object and visual object that creates an identifiable perceptual phenomenon, which can be treated as a 'building block' in the creation of audiovisual work based primarily on electroacoustic composition practice and techniques. This thesis explores how the audiovisual object can be defined and identified in existing works, and offers an examination of how it can be used as a compositional tool. The historical development of the form and the effect of the performance venue on audience immersion are also explored. The audiovisual object concept builds upon theories of electroacoustic composition and film sound design. The audiovisual object is defined in relation to existing concepts of the sound object and visual object, while synaesthesia and cross-modal perception are examined to show how the relationship between sound and vision in the audiovisual object can be strengthened. Electroacoustic composition and animation both developed through technological advances, whether the manipulation of recorded sounds or the manipulation of drawn/photographed objects. The key stages in the development of techniques and theories in both disciplines are examined and compared against each other, highlighting correlations and contrasts. The physical space where an audiovisual composition is performed also has a bearing on how the work is perceived and received. Current standard performance spaces include acousmatic concert systems, which emphasize the audio aspect over the visual, and the cinema, which focuses on the visual. Spaces that afford a much higher level of envelopment in the work include hemispheric projection, while individual experience through virtual reality systems could become a key platform. The key elements of the audiovisual object, interaction between objects, and their successful use in audiovisual compositions are also investigated in a series of case studies. Specific audiovisual works are examined to highlight techniques for creating successful audiovisual objects and interactions. As this research degree is in creative practice, a portfolio of four composed works is also included, with production notes explaining the inspiration behind and symbolism within each work, along with the practical techniques employed in their creation. The basis for each work is a short electroacoustic composition which has then been developed with abstract 3D CGI animation into an audiovisual composition, demonstrating the development of my own practice as well as exploring the concept of the audiovisual object. The concept of the audiovisual object draws together existing theories concerning the sound object, visual perception, and phenomenology. The concept, the associated investigation of how audiovisual compositions have evolved over time, and the analysis and critique of case studies based on this central concept contribute both theory and creative practice principles to this form of artistic creativity. This thesis forms a basis for approaching the creative process both as a creator and a critic, and opens up a research pathway for further investigation.
35

Perspective Taking and Relative Clause Comprehension: A Cross-Modal Picture Priming Study

Jones, Nicola C, 30 June 2010
Fourteen young adults participated in a cross-modal picture priming study. Perspective-shift processing was assessed with reaction times in four types of relative clause sentences and in control sentences. The predictions were: (1) the easier the perspective shift, the faster the reaction times; and (2) subject relative clauses would show a priming effect, whereas object relative clauses would show attenuated or no priming because of the difficulty of following perspective. A priming effect was observed for 1- switch relative clause sentences and for control sentences, while no priming effect was observed for 0 switch, 1+ switch, or 2 switch sentences. The results suggest that variations in local syntactic constructions and word order facilitated relative clause processing. Violations of semantic expectations and noun-noun-verb distance in following perspective can both contribute to the complexity of relative clause processing.
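As an illustration of how such a priming effect is typically quantified (a sketch under assumed column names and data file, not this study's actual analysis), the reaction-time difference between control and primed trials can be computed per sentence condition and tested with a paired t-test:

```python
# Minimal sketch: priming effect = mean RT(control) - mean RT(primed) per condition,
# tested with a paired t-test across subjects. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("priming_rts.csv")  # columns: subject, condition, trial_type, rt_ms

# Mean RT per subject for primed vs. control trials within each condition
cell_means = (df.groupby(["condition", "subject", "trial_type"])["rt_ms"]
                .mean()
                .unstack("trial_type"))

for condition, rts in cell_means.groupby(level="condition"):
    priming_effect = rts["control"] - rts["primed"]      # positive = facilitation
    t, p = stats.ttest_rel(rts["control"], rts["primed"])
    print(f"{condition}: mean priming effect = {priming_effect.mean():.1f} ms, "
          f"t = {t:.2f}, p = {p:.3f}")
```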
36

Robust and comprehensive joint image-text representations / Recherche multimédia à large échelle

Tran, Thi Quynh Nhi, 3 May 2017
This thesis investigates the joint modeling of the visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained for example by Kernel Canonical Correlation Analysis, on which images and text can both be represented and directly compared, is a generally adopted solution. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first concerns information that is poorly represented in the common space yet very significant for a retrieval task. The second is a separation between modalities in the common space, which leads to coarse cross-modal matching. To deal with the first limitation, we put forward a model that first identifies poorly represented information and then finds ways to combine it with data that is relatively well represented in the joint space. Evaluations on text-illustration tasks show that appropriately identifying and taking such information into account strongly improves cross-modal retrieval results.
The major work of this thesis aims to cope with the separation between modalities in the joint space in order to enhance the performance of cross-modal tasks. We propose two representation methods, for bi-modal or uni-modal documents, that aggregate information from both the visual and textual modalities projected onto the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality, and then use that information to build a final bi-modal representation for the uni-modal document. Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval and for bi-modal and cross-modal classification.
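To make the common-space idea concrete, here is a minimal sketch under assumed toy data and feature dimensions (an illustration, not the thesis implementation, which builds on kernel CCA and additional representation strategies): paired image and text features are projected into a shared space learned by linear CCA, and cross-modal retrieval is performed by cosine similarity in that space.

```python
# Minimal sketch of cross-modal retrieval through a CCA common space.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_pairs = 500
img_feats = rng.normal(size=(n_pairs, 128))   # e.g. CNN image descriptors (assumed)
txt_feats = rng.normal(size=(n_pairs, 300))   # e.g. averaged word embeddings (assumed)

# Learn a joint space from paired image/text training data
cca = CCA(n_components=32, max_iter=1000)
img_proj, txt_proj = cca.fit_transform(img_feats, txt_feats)

# Cross-modal retrieval: rank all texts for one image query by cosine similarity
img_proj = normalize(img_proj)
txt_proj = normalize(txt_proj)
query = img_proj[0]                            # projected image query
scores = txt_proj @ query                      # cosine similarity in the joint space
top5 = np.argsort(-scores)[:5]
print("Top-5 retrieved text indices:", top5)
```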
37

Étude expérimentale du symbolisme sonore et réflexions évolutionnaires / Experimental assessment of sound symbolism and evolutionary considerations

De Carolis, Léa, 20 June 2019
Sound symbolism, or motivation as we refer to it here, corresponds to the assumption that some words have a natural, rather than arbitrary, relation with their meanings through their segmental composition. Evidence for this stands out from the literature, from cross-linguistic investigations to psycholinguistic experiments. For example, a closed vowel such as [i] is more readily associated with smallness, while an open vowel such as [a] is more readily associated with largeness. This pattern appears in the lexicons of different languages (e.g. Ohala, 1997) as well as in the results of association tasks (Sapir, 1929) with participants speaking different languages and at different life stages. These commonalities (e.g. Iwasaki, Vinson, & Vigliocco, 2007) and their earliness (e.g. Ozturk, Krehm, & Vouloumanos, 2013) make it possible to formulate the hypothesis that motivation may have been a key driver in the emergence of language (Imai et al., 2015), by facilitating interactions and agreement between individuals.
This thesis offers several methodological contributions to the study of motivated associations. The first study assessed whether animal features (e.g. dangerousness) or biological classes (birds vs. fish, based on Berlin, 1994) are relevant concepts for highlighting motivated associations, on the assumption that animals were suitable candidates for the content of early interactions (as potential sources of food and threat). It raised methodological issues which led to the second study, comparing different protocols of association tasks found across experiments. Indeed, settings and populations vary from one study to another, so it is not possible to determine which of the two types of contrast involved in association tasks is determinant for making associations: the phonetic contrast or the conceptual one. This second study made it possible to appraise the influence of different protocols by controlling for other sources of variation across the tasks. It also highlighted the need to better analyze the cognitive processes involved in motivated associations. This led us to complement our investigation of phonetic and conceptual contrasts with a study of the influence of the graphemic shapes of letters, following Cuskley, Simmer and Kirby's (2015) proposal that the shapes of letters have an impact in the bouba-kiki task. This task is a well-known paradigm in the study of motivated associations, based on associating pseudo-words with round or spiky shapes. Cuskley et al. suggested that a spiky shape would facilitate the processing of a pseudo-word that contains an angular letter such as 'k'. In our third study, we used an implicit version of the bouba-kiki task, namely a lexical decision task, building on a previous experiment by Westbury (2005). In that experiment, spiky and round frames in which the linguistic stimuli appeared seemed to facilitate the processing of pseudo-words according to their segmental composition (e.g. spiky frames facilitated the processing of voiceless plosives such as [k]). We manipulated the shapes of letters with two different fonts for displaying the linguistic stimuli, one angular and one curvy, and tried to disentangle the respective impacts of the frames and the fonts on participants' response times. The results highlighted the importance of taking low-level visual processes into account in the study of motivated associations.
38

Learning to Generate Things and Stuff: Guided Generative Adversarial Networks for Generating Human Faces, Hands, Bodies, and Natural Scenes

Tang, Hao, 27 May 2021
This thesis focuses on image generation. Existing state-of-the-art methods still produce unsatisfying results, so to address this limitation and further improve the quality of generated images we propose several novel models. The image generation task can be roughly divided into three subtasks: person image generation, scene image generation, and cross-modal translation. Person image generation can be further divided into hand gesture generation, facial expression generation, and person pose generation, while scene image generation can be divided into cross-view image translation and semantic image synthesis. For each task we propose a corresponding solution: for hand gesture generation, the GestureGAN framework; for facial expression generation, the Cycle-in-Cycle GAN (C2GAN) framework; for person pose generation, the XingGAN and BiGraphGAN frameworks; for cross-view image translation, the SelectionGAN framework; and for semantic image synthesis, the Local and Global GAN (LGGAN), EdgeGAN, and Dual Attention GAN (DAGAN) frameworks. Although each method was originally proposed for a specific task, we later found that each is general and can be applied to other tasks. For instance, GestureGAN can be used for both hand gesture generation and cross-view image translation; C2GAN for facial expression generation, person pose generation, hand gesture generation, and cross-view image translation; and SelectionGAN for cross-view image translation, facial expression generation, person pose generation, hand gesture generation, and semantic image synthesis. Moreover, we explore cross-modal translation and propose a novel DanceGAN for audio-to-video translation.
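The frameworks listed above share a common conditional generator/discriminator backbone. The sketch below is a deliberately generic PyTorch illustration of that backbone with toy layer sizes and a dummy batch; it is not the GestureGAN, SelectionGAN, or other architectures proposed in the thesis.

```python
# Generic conditional GAN skeleton: the generator maps a conditioning image
# (e.g. a pose or semantic map) to an output image; the discriminator judges
# (condition, image) pairs. Simplified sketch with assumed shapes.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, condition):
        return self.net(condition)

class Discriminator(nn.Module):
    def __init__(self, in_ch=6):  # condition and image concatenated on channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake logits
        )

    def forward(self, condition, image):
        return self.net(torch.cat([condition, image], dim=1))

# Loss computation on a dummy batch (shapes are assumptions for the sketch)
G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
cond = torch.randn(2, 3, 64, 64)      # e.g. pose or semantic map
real = torch.randn(2, 3, 64, 64)      # target image
fake = G(cond)
d_real, d_fake = D(cond, real), D(cond, fake.detach())
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
g_out = D(cond, fake)
g_loss = bce(g_out, torch.ones_like(g_out)) + nn.functional.l1_loss(fake, real)
print(d_loss.item(), g_loss.item())
```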
39

Une approche incarnée du vieillissement normal et pathologique : compréhension du fonctionnement mnésique selon les interactions entre mémoire et perception / An embodied approach of healthy aging and Alzheimer’s disease : Understanding memory functioning through the interactions between memory and perception

Vallet, Guillaume, 14 May 2012
Normal aging and Alzheimer's disease (AD) are both characterized by memory disorders associated with sensory and perceptual decline. These links are readily explained by embodied cognition theory, in which knowledge remains grounded in its modal (mainly sensory-motor) properties. The objective of the present research is to assess embodied cognition theory as applied to young adults, healthy older adults, and patients with AD. In two experiments, a complete neuropsychological battery was combined with a cross-modal priming paradigm (audition to vision). The novelty of the paradigm was to present a meaningless visual mask for half of the sound primes. Experiment 1 was composed of two distinct phases, whereas the prime and the target were presented in the same trial in Experiment 2. The results demonstrated a significant cross-modal priming effect in young and healthy older adults. The mask interfered with the priming effect only in semantically congruent situations. This interference and its specificity demonstrate that young and older adults have modal knowledge. In contrast, patients with AD showed no priming effect even though the effect is perceptual in nature, supporting the cerebral disconnection hypothesis in AD. Taken together, the data suggest that memory difficulties in normal aging may be related to a degradation of the quality of perception and thus of knowledge, whereas memory impairments in AD might stem from a deficit in dynamically binding the different components of a memory. The present research supports embodied cognition theory and demonstrates the value of this kind of approach for exploring memory functioning in neuropsychology, notably in aging. Such approaches place the interactions between memory and perception at the heart of memory functioning.
40

Effect of Attentional Capture and Cross-Modal Interference in Multisensory Cognitive Processing

Jennings, Michael, 1 January 2018
Despite considerable research, the effects of common types of noise on verbal and spatial information processing remain relatively unknown. Three experiments using convenience sampling were conducted to investigate the effect of auditory interference on the cognitive performance of 24 adult men and women during the Stroop test, object recognition and spatial location tasks, and tasks involving the perception of object size, shape, and spatial location. The data were analyzed using univariate analysis of variance and one-way multivariate analysis of variance. The Experiment 1 findings indicated that reaction-time performance for gender and age groups was affected by auditory interference between experimental conditions, whereas recognition accuracy was affected only by experimental condition. The Experiment 2a results showed that reaction-time performance for recognizing object features was affected by auditory interference between age groups, and recognition accuracy by experimental condition. The Experiment 2b results demonstrated that reaction-time performance for detecting the spatial location of objects was affected by auditory interference between age groups; reaction time was also affected by the type of interference and by spatial location, and recognition accuracy was affected by interference condition and spatial location. The Experiment 3 findings suggested that reaction-time performance for assessing part-whole relationships was affected by auditory interference between age groups, and that recognition accuracy was affected by interference condition between experimental groups. This study may support social change by informing the design of learning and workplace environments, the study of the neurological correlates of auditory and visual stimuli, and the understanding of adult pathologies such as attention deficit hyperactivity disorder.
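As a small illustration of the univariate analysis named above (a sketch with a hypothetical data file and column names, not the study's actual analysis script), reaction times can be modeled as a function of interference condition and age group with statsmodels:

```python
# Minimal sketch of a univariate ANOVA on reaction times:
# rt as a function of interference condition and age group (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment1_rts.csv")   # columns: rt_ms, condition, age_group, gender

model = smf.ols("rt_ms ~ C(condition) * C(age_group)", data=df).fit()
print(anova_lm(model, typ=2))             # Type-II sums of squares table
```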
