121 |
Crossmodal plasticity in the hamster enucleated at birth: studies of the distribution of CaBP-ir interneurons in the primary visual and auditory cortices. Desgent, Sébastien 01 1900 (has links)
No description available.
|
122 |
Détection des émotions à partir de vidéos dans un environnement non contrôlé / Detection of emotions from video in non-controlled environment. Khan, Rizwan Ahmed 14 November 2013 (has links)
In everyday communication we attend as much to the speaker as to the information conveyed: verbal and non-verbal channels operate in parallel, and facial expression is the most effective form of non-verbal communication, providing cues about emotional state, mindset and intention. An automatic facial expression recognition framework generally consists of three steps: face tracking, feature extraction and expression classification. To build a robust framework capable of producing reliable results, it is necessary to extract features with strong discriminative power from the appropriate facial regions. Many methods for automatic facial expression recognition have been proposed recently, but they are invariably computationally expensive, spending processing time on the whole face image or dividing the face according to mathematical or geometrical heuristics for feature extraction. None takes inspiration from the human visual system, which performs the same task routinely. In this thesis we took inspiration from human vision to determine from which facial regions to extract features. We argue that expression analysis and recognition can be carried out more effectively if, as in the human visual system, only some regions (the perceptually salient ones) are selected for further processing. We propose several frameworks for automatic expression recognition, all inspired by human vision, each addressing the shortcomings of the previous one. The proposed frameworks generally achieve results that exceed state-of-the-art methods on reference databases. Moreover, because they process only the perceptually salient region(s) of the face, they reduce feature vector dimensionality and feature extraction time, making them suitable for real-time applications.
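The salient-region idea above can be illustrated with a minimal sketch (not the thesis's actual descriptors): features are computed only from hypothetical eye and mouth crops of a face image, which keeps the feature vector far smaller than a whole-face descriptor. The fractional region coordinates and the histogram features are assumptions for illustration only.

```python
import numpy as np

# Hypothetical fractional crop boxes (top, bottom, left, right) for the
# perceptually salient regions; the thesis's actual regions and features differ.
SALIENT_REGIONS = {
    "eyes":  (0.20, 0.45, 0.10, 0.90),
    "mouth": (0.65, 0.95, 0.25, 0.75),
}

def extract_salient_features(face, bins=16):
    """Concatenate intensity histograms computed on salient crops only."""
    h, w = face.shape
    feats = []
    for top, bot, left, right in SALIENT_REGIONS.values():
        crop = face[int(top * h):int(bot * h), int(left * w):int(right * w)]
        hist, _ = np.histogram(crop, bins=bins, range=(0, 255), density=True)
        feats.append(hist)
    return np.concatenate(feats)

face = np.random.default_rng(0).integers(0, 256, size=(96, 96))
vec = extract_salient_features(face)
print(vec.shape)  # (32,)
```

With two regions and 16 bins each, the descriptor has 32 dimensions, versus thousands for dense whole-face features.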
|
123 |
Visual experience-dependent oscillations in the mouse visual system. Samuel T Kissinger (8086100) 06 December 2019 (has links)
<p>The visual system is capable of interpreting immense sensory complexity, allowing us to quickly identify behaviorally relevant stimuli in the environment. It performs this task with a hierarchical organization that detects, relays, and integrates visual stimulus features into an interpretable form. To understand the complexities of this system, visual neuroscientists have benefited from the many advantages of using mice as visual models. Despite their poor visual acuity, these animals possess surprisingly complex visual systems, and they have been instrumental in understanding how visual features are processed in the primary visual cortex (V1). However, a growing body of literature has shown that primary sensory areas like V1 are capable of more than basic feature detection: they can express neural activity patterns related to learning, memory, categorization, and prediction.</p>
<p>Visual
experience fundamentally changes the encoding and perception of visual stimuli
at many scales, and allows us to become familiar with
environmental cues. However, the neural
processes that govern visual familiarity are poorly understood. By exposing
awake mice to repetitively presented visual stimuli over several days, we
observed the emergence of low-frequency oscillations in V1. The oscillations emerged in
population level responses known as visually evoked potentials (VEPs), as well
as single-unit responses, and were not observed before the perceptual experience had occurred. Nor were they evoked by novel visual stimuli, suggesting that these low-frequency oscillations represent a novel neural signature of visual familiarity. The oscillations also required muscarinic
acetylcholine receptors (mAChRs) for
their induction and expression, highlighting the importance of the cholinergic
system in this learning and memory-based phenomenon. Ongoing visually evoked
oscillations were also shown to increase the VEP amplitude of incoming visual
stimuli if the stimuli were presented at the high excitability phase of the
oscillations, demonstrating how neural activity with unique temporal dynamics
can be used to influence visual processing.</p>
<p>Given the necessity of
perceptual experience for the strong expression of these oscillations and their
dependence on the cholinergic system, it was clear we had discovered a
phenomenon grounded in visual learning or memory. To further validate this, we
characterized this response in a mouse model of Fragile X syndrome (FX), the
most common inherited form of autism and a condition with known visual
perceptual learning deficits. Using a multifaceted experimental approach, we found a number of neurophysiological differences in the oscillations displayed by FX mice. Extracellular recordings revealed shorter-duration and lower-power
oscillatory activity in FX mice. Furthermore, we found that the frequency of
peak oscillatory activity was significantly decreased in FX mice, demonstrating
a unique temporal neural impairment not previously reported in FX. In
collaboration with Dr. Christopher J. Quinn at Purdue, we performed functional
connectivity analysis on the extracellularly recorded spikes from WT and FX
mice. This analysis revealed significant impairments in functional connections
from multiple layers in FX mice after the perceptual experience; some of which
were validated by another graduate student (Qiuyu Wu) using Channelrhodopsin-2
assisted circuit mapping (CRACM). Together, these results shed new light on how
visual stimulus familiarity is differentially encoded in FX via persistent
oscillations, and allowed us to identify impairments in cross layer
connectivity that may underlie these differences. </p>
<p>Finally,
we asked whether these oscillations are observable in other brain areas or are intrinsic
to V1. Furthermore, we sought to determine if the oscillating unit populations
in V1 possess uniform firing dynamics, or contribute differentially to the
population level response. By performing paired recordings, we did not find
prominent oscillatory activity in two visual thalamic nuclei (dLGN and LP) or a
nonvisual area (RSC) connected to V1, suggesting the oscillations may not
propagate with similar dynamics via cortico-thalamic connections or
retrosplenial connections, but may either be uniquely distributed
across the visual hierarchy or predominantly restricted to V1. Using
K-means clustering on a large population of oscillating units in V1, we found
unique temporal profiles of visually evoked responses, demonstrating distinct
contributions of different unit sub-populations to the oscillation response
dynamics.</p>
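The familiarity effect described above can be caricatured with a toy spectral analysis (an assumption-laden sketch, not the thesis's pipeline): a simulated "familiar" VEP trace carrying an induced low-frequency oscillation is compared against a "novel" noise-only trace by the fraction of spectral power in a low-frequency band. The sampling rate, band edges, 5 Hz frequency, and amplitudes are all illustrative.

```python
import numpy as np

fs = 1000                             # sampling rate (Hz), assumed
t = np.arange(2000) / fs              # 2 s of samples
rng = np.random.default_rng(1)

# Simulated traces: "familiar" carries an induced ~5 Hz oscillation,
# "novel" is noise only (frequencies and amplitudes are illustrative).
familiar = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
novel = 0.5 * rng.standard_normal(t.size)

def low_freq_fraction(x, fs, band=(3.0, 8.0)):
    """Fraction of total spectral power falling in a low-frequency band."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

print(low_freq_fraction(familiar, fs) > low_freq_fraction(novel, fs))  # True
```

The "familiar" trace concentrates most of its power in the low-frequency band, while the noise-only trace spreads power across the whole spectrum.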
|
124 |
CONTEXTUAL MODULATION OF NEURAL RESPONSES IN THE MOUSE VISUAL SYSTEM. Alexandr Pak (10531388) 07 May 2021 (has links)
<div>The visual system is responsible for processing visual input, inferring its environmental causes, and assessing its behavioral significance, which ultimately shapes visual perception and guides animal behavior. There is emerging evidence that visual perception does not simply mirror the outside world but is heavily influenced by contextual information. Specifically, context may refer to the sensory, cognitive, and/or behavioral cues that help assess the behavioral relevance of image features. Among the best-known examples of such behavior are visual (optical) illusions. These illusions contain sensory cues that induce a subjective percept not aligned with the physical nature of the stimulation, which in turn suggests that the visual system is not a passive filter of the outside world but rather an active inference machine.</div><div>Such robust behavior of the visual system is achieved through intricate neural computations spanning several brain regions that allow dynamic visual processing. Despite numerous attempts to gain insight into those computations, it has been challenging to decipher the circuit-level implementation of contextual processing due to technological limitations. These questions are of great importance not only for basic research but also for gaining deeper insight into neurodevelopmental disorders characterized by altered sensory experiences. Recent advances in genetic engineering and neurotechnology have made the mouse an attractive model for studying the visual system and enabled us and others to gain unprecedented cellular- and circuit-level insights into the neural mechanisms underlying contextual processing.</div><div>We first investigated how familiarity modifies the neural representation of stimuli in the mouse primary visual cortex (V1). 
Using silicon probe recordings and pupillometry, we probed neural activity in naive mice and after the animals had been exposed to the same stimulus over the course of several days. We discovered that familiar stimuli evoke low-frequency oscillations in V1. Importantly, those oscillations were specific to the spatial frequency content of the familiar stimulus. To further validate our findings, we investigated how this novel form of visual learning is represented in serotonin-transporter (SERT) deficient mice, transgenic animals previously found to have various neurophysiological alterations. We found that SERT-deficient animals showed longer oscillatory spiking activity and impaired cortical tuning after visual learning. Taken together, we discovered a novel phenomenon of familiarity-evoked oscillations in V1 and utilized it to reveal altered perceptual learning in SERT-deficient mice.</div><div>Next, we investigated how spatial context influences sensory processing. Visual illusions provide a great opportunity to investigate spatial contextual modulation in early visual areas. Leveraging behavioral training, high-density silicon probe recordings, and optogenetics, we provided evidence for an interplay of feedforward and feedback pathways during illusory processing in V1. We first designed an operant behavioral task to investigate illusory perception in mice. The Kanizsa illusory-contour paradigm was then adapted from primate studies to mouse V1 to elucidate the neural correlates of illusory responses. These experiments provided behavioral and neurophysiological evidence for illusory perception in mice. Using optogenetics, we then showed that suppression of the lateromedial area inhibits illusory responses in mouse V1. 
Taken together, we demonstrated illusory responses in mice and their dependence on top-down feedback from higher-order visual areas.</div><div>Finally, we investigated how temporal context modulates neural responses by combining silicon probe recordings with a novel visual oddball paradigm that uses spatial-frequency-filtered stimuli. Our work extended prior oddball studies by investigating how adaptation and novelty processing depend on the tuning properties of neurons and their laminar position. Furthermore, given that reduced adaptation and sensory hypersensitivity are hallmarks of altered sensory experience in autism, we investigated the effects of temporal context on visual processing in V1 of a mouse model of fragile X syndrome (FX), a leading monogenic cause of autism. We first showed that adaptation was modulated by the tuning properties of neurons in both genotypes; however, it was more confined to neurons preferring the adapted feature in FX mice. Oddball responses, on the other hand, were modulated by the laminar position of neurons in WT mice, with the strongest novelty responses in superficial layers, whereas they were uniformly distributed across the cortical column in FX animals. Lastly, we observed differential processing of omission responses in FX vs. WT mice. Overall, our findings suggest that reduced adaptation and increased oddball processing might contribute to altered perceptual experiences in FX and autism.</div>
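One quantity discussed above, adaptation over repeated presentations, can be sketched with a common contrast-style index. The formula and the firing-rate values below are illustrative assumptions, not the thesis's analysis.

```python
import numpy as np

def adaptation_index(rates):
    """(first - last) / (first + last) over responses to a repeated stimulus;
    larger values mean stronger adaptation."""
    first, last = rates[0], rates[-1]
    return (first - last) / (first + last)

# Hypothetical trial-averaged firing rates (Hz) across repeated presentations.
wt_rates = np.array([20.0, 15.0, 12.0, 10.0])   # adapts strongly
fx_rates = np.array([20.0, 19.0, 18.5, 18.0])   # adapts weakly

print(adaptation_index(wt_rates) > adaptation_index(fx_rates))  # True
```

With these toy numbers the WT index is 10/30 while the FX index is 2/38, mirroring the reduced adaptation described for FX animals.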
|
125 |
The role of pericytes in the regulation of retinal microvasculature dynamics in health and disease. Villafranca-Baughman, Deborah 12 1900 (has links)
No description available.
|
126 |
Visual saliency extraction from compressed streams / Extraction de la saillance visuelle à partir de flux compressés. Ammar, Marwa 15 June 2017 (has links)
The theoretical ground for visual saliency was laid some 35 years ago by Treisman, who advanced the feature-integration theory of the human visual system: in any visual content, some regions are salient (appealing) because of the discrepancy between their features (intensity, color, texture, motion) and those of their surroundings. This thesis offers a comprehensive methodological and experimental framework for extracting salient regions directly from compressed video streams (namely MPEG-4 AVC and HEVC), with minimal decoding operations. Saliency extraction in the compressed domain is a priori a conceptual contradiction: on the one hand, as suggested by Treisman, saliency is given by visual singularities in the video content; on the other hand, in order to eliminate visual redundancy, compressed streams are no longer expected to preserve singularities. The thesis also brings to light the practical benefit of compressed-domain saliency extraction. In this respect, the case of robust video watermarking is targeted, and it is demonstrated that the saliency map acts as an optimization tool, allowing transparency to be increased (for a prescribed quantity of inserted information and robustness against attacks) while decreasing the overall computational complexity. As an overall conclusion, the thesis demonstrates both methodologically and experimentally that although the MPEG-4 AVC and HEVC standards do not explicitly rely on any visual saliency principle, their stream syntax elements preserve this remarkable property, linking the digital representation of video to sophisticated human psycho-cognitive mechanisms.
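Treisman-style saliency as feature discrepancy can be sketched in the pixel domain (the thesis itself works in the compressed domain; this toy uses intensity only and is purely illustrative): each block's saliency is its deviation from the global mean intensity, so a region that differs from its surroundings stands out.

```python
import numpy as np

def block_saliency(img, block=8):
    """Per-block saliency = |block mean intensity - global mean intensity|."""
    h, w = img.shape
    gh, gw = h // block, w // block
    tiles = img[:gh * block, :gw * block].reshape(gh, block, gw, block)
    return np.abs(tiles.mean(axis=(1, 3)) - img.mean())

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0               # a bright square on a dark background
sal = block_saliency(img)
peak = tuple(int(i) for i in np.unravel_index(sal.argmax(), sal.shape))
print(sal.shape, peak)  # (8, 8) (3, 3)
```

The most salient block falls inside the bright square, the only singularity in this synthetic image; real saliency models add color, texture, and motion channels.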
|
127 |
Mouvements oculaires chez l'enfant dyslexique / Eye Movements in Dyslexic Children. Tiadi, Bi Kuyami Guy-Aimé 23 September 2016 (has links)
Developmental dyslexia is a neurodevelopmental disorder that specifically affects written-language learning in about 10% of school-age children. In recent years, several studies have reported oculomotor abnormalities in dyslexic children; nevertheless, many questions about their oculomotor performance remain unanswered or little studied. We conducted three studies comparing the eye movements of dyslexic children with those of non-dyslexic children. In the first study, we recorded vertical saccades in dyslexic children for the first time. The results showed that, compared with non-dyslexic children of the same chronological age, dyslexic children had longer latencies, poorer accuracy, and slower saccadic speeds with an up/down asymmetry. Studies 2 and 3 extended the investigation to visual fixation and to audiovisual phonological recognition, respectively. We reported lower-quality visual fixation and audiovisual phonological recognition in dyslexic children relative to non-dyslexic groups matched for chronological age and for reading age. Taken together, these findings suggest that atypical development of the magnocellular visual system and of the cortico-subcortical structures responsible for oculomotor control, together with attentional difficulties, could explain the oculomotor disturbances of dyslexic children. We therefore proposed avenues for oculomotor rehabilitation that could help improve reading skills in dyslexia. Keywords: eye movements, saccades, fixations, visual system, visual cortex, cortical and subcortical structures, attention, developmental dyslexia.
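The saccade measurements discussed above (latency, velocity) are typically derived from an eye-position trace with a velocity threshold. A minimal sketch under stated assumptions: the sampling rate, threshold, and idealized step-shaped trace are illustrative, and the trace is taken to start at a hypothetical target onset.

```python
import numpy as np

fs = 500                                  # eye-tracker sampling rate (Hz), assumed
n = np.arange(300)                        # sample index
pos = np.where(n < 100, 0.0, 10.0)        # idealized 10-degree saccade at sample 100

def saccade_latency(pos, fs, vel_thresh=30.0):
    """Time (s) of the first sample whose velocity exceeds vel_thresh deg/s."""
    vel = np.abs(np.diff(pos)) * fs       # point-to-point velocity, deg/s
    onset = int(np.argmax(vel > vel_thresh))
    return onset / fs

print(saccade_latency(pos, fs))  # 0.198
```

The detected onset is the step between samples 99 and 100, i.e. about 200 ms after the assumed target onset; longer values of this quantity are what the first study reports in dyslexic children.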
|
128 |
The development of outer retinal photoresponsivity and the effects of sensory deprivationBonezzi, Paul J. January 2020 (has links)
No description available.
|
129 |
The effect of color "background noise" on the estimation of the relative quantities of squares of different colors in multi-color "checkerboard" stimuli in human subjects: a psychophysical and computational study. Milosz, Julien 08 1900 (has links)
Decision-making is the general ability to choose between two or more alternatives given the current information and the objectives at stake. It is generally assumed that, at the level of the nervous system, the decision process consists of accumulating relevant information, called "evidence", for several alternatives, comparing them, and finally committing to the best alternative given the decision context (J. I. Gold & Shadlen, 2007). This master's project focuses on a particular subtype of decision-making: so-called perceptual decisions. I examine the psychophysical patterns (response times and success rates) of human subjects making decisions in visual tasks containing dynamic checkerboards composed of colored squares. Specifically, the goal of this project is to study the role of "color noise" in decision dynamics. Two new decision-making tasks were carefully constructed for this purpose: the first with a binary noise level and the second with graded noise levels. Results from the first task show that, in the absence of color noise, subjects' psychophysical patterns are best explained as being modulated by the amount of normalized net evidence. In the same task, adding noise systematically alters these patterns so that they appear sensitive only to the net evidence of the stimuli, as if the normalization process had been eliminated. The results of the second task support the explanation that sensory evidence is progressively normalized as a function of the noise level present, and that normalization is not an all-or-nothing phenomenon in the context of perceptual decision-making. Finally, a unifying hypothesis is proposed: the brain estimates net evidence and dynamically adapts the decision context from trial to trial using an estimated amount of total potential evidence, which manifests as normalization.
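The contrast between net and normalized net evidence described above can be made concrete numerically (a toy under assumed square counts, not the thesis's model): two checkerboards with the same net evidence differ in normalized net evidence once the total number of task-relevant squares changes.

```python
import numpy as np

def net_evidence(counts, a, b):
    """Raw net evidence: number of squares of color a minus color b."""
    return counts[a] - counts[b]

def normalized_net_evidence(counts, a, b):
    """Net evidence scaled by the total number of task-relevant squares."""
    return (counts[a] - counts[b]) / (counts[a] + counts[b])

# Hypothetical checkerboards with identical net evidence (+10 red squares)
# but different totals.
low_total  = {"red": 35, "blue": 25}
high_total = {"red": 55, "blue": 45}

print(net_evidence(low_total, "red", "blue"),
      net_evidence(high_total, "red", "blue"))          # 10 10
print(round(normalized_net_evidence(low_total, "red", "blue"), 3),
      round(normalized_net_evidence(high_total, "red", "blue"), 3))  # 0.167 0.1
```

Under a pure net-evidence account both stimuli should yield identical behavior; under a normalized account the second, denser stimulus carries weaker relative evidence, which is the distinction the two tasks were built to probe.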
|
130 |
[pt] NADA SOBRE NÓS, SEM NÓS: DESIGN, UM CAMINHO PARA DIMINUIR A FRAGMENTAÇÃO NO PROCESSO DE INCLUSÃO DA CRIANÇA COM TRANSTORNO DO ESPECTRO AUTISTA NO AMBIENTE DE ENSINO-APRENDIZAGEM / [en] NOTHING ABOUT US, WITHOUT US: DESIGN, A WAY TO REDUCE THE FRAGMENTATION IN THE PROCESS OF INCLUSION OF CHILDREN WITH AUTISM SPECTRUM DISORDER IN THE TEACHING-LEARNING ENVIRONMENT. MARIANA NIOAC DE SALLES 19 May 2020 (has links)
[en] The inclusion of people with disabilities in formal teaching-learning environments is recent in Brazil; the most recent legislation is the Brazilian Inclusion Law (LBI, 2015). Following these changes, the number of students with Autism Spectrum Disorder (ASD) included in regular teaching-learning environments has increased significantly in recent years. Since the training of Basic Education teachers predates the LBI, we start from the assumption that it gives little attention to situations of inclusion. We then established as a research question how Design could enhance the visibility of inclusion experiences, with the goal of integrating teachers and students in building these environments. Our methodological approach was based on Participatory Design, in order to understand how this inclusion was being carried out. We chose three private schools in the city of Rio de Janeiro as our field of study. In parallel, we visited non-formal education environments to identify points of convergence and divergence between regular formal education and specialized education. Throughout the process we had the participation of people with ASD, and we found their participation and interaction in the research to be extremely important. We developed a resource to give visibility to children with ASD in inclusive teaching-learning environments. It enables a spatial view of where these children are being placed in the classroom so that, with records of their practices, teachers can share and debate their experiences with other professionals. The resource facilitates the recognition of inclusive experiences by the teacher and reduces the fragmentation of the process of including the child with ASD in the teaching-learning environment.
|