21

Developmental predictors of auditory-visual integration of speech in reverberation and noise

Wroblewski, Marcin 15 December 2017
Objectives: Elementary school classrooms that meet the acoustic requirements for near-optimum speech recognition are extremely scarce. Poor classroom acoustics may become a barrier to speech understanding as children enter school. The purpose of this study was threefold: 1) to quantify the extent to which reverberation, lexical difficulty, and presentation mode affect speech recognition in noise, 2) to examine to what extent auditory-visual (AV) integration assists with the recognition of speech in noisy and reverberant environments typical of elementary school classrooms, and 3) to understand the relationship between developing mechanisms of multisensory integration and the concurrently developing linguistic and cognitive abilities. Design: Twenty-seven typically developing children and 9 young adults participated. Participants repeated short sentences reproduced by 10 speakers on a 30” HDTV and/or over loudspeakers located around the listener in a simulated classroom environment. Signal-to-noise ratios (SNRs) for 70% (SNR70) and 30% (SNR30) correct performance were measured using an adaptive tracking procedure. Auditory-visual integration was assessed via the SNR difference between the AV and auditory-only (AO) conditions, labeled speech-reading benefit (SRB). Linguistic and cognitive aptitude was assessed using the NIH-Toolbox: Cognition Battery (NIH-TB: CB). Results: Children required more favorable SNRs than adults for equivalent performance. Participants benefited from the reduction in lexical difficulty and, in most cases, from the reduction in reverberation time. Reverberation affected children’s speech recognition in the AO condition and adults’ in the AV condition. SRB was greater at SNR30 than at SNR70. Adults showed a marginally significant increase in AV integration relative to children, and an increase in SRB for lexically hard versus easy words at the high level of reverberation. Development of linguistic and cognitive aptitude accounted for approximately 35% of the variance in AV integration, with the crystallized and fluid cognition composite scores identified as the strongest predictors. Conclusions: The results of this study add to the body of evidence that children require more favorable SNRs than adults to perform the same speech recognition tasks in simulated listening environments akin to school classrooms. Our findings shed light on the development of AV integration for speech recognition in noise and reverberation during the school years, and provide insight into the balance of cognitive and linguistic underpinnings necessary for AV integration of degraded speech.
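As a minimal sketch of the speech-reading benefit measure described above: the abstract defines SRB as the SNR difference between the AV and AO conditions, and under the common convention that benefit is reported as a positive number when vision helps, it can be computed as the AO threshold minus the AV threshold at the same percent-correct level. The sign convention and the threshold values below are illustrative assumptions, not data from the study.

```python
def speech_reading_benefit(snr_ao_db, snr_av_db):
    """Speech-reading benefit (dB): how much noisier a signal the listener
    tolerates at the same percent-correct level once visual speech cues
    are added (positive = visual benefit)."""
    return snr_ao_db - snr_av_db

# Hypothetical adaptive-track thresholds for one listener at the 30%-correct point
snr30_ao = -7.5    # dB SNR required in the auditory-only condition
snr30_av = -11.0   # dB SNR required in the auditory-visual condition
print(f"SRB at SNR30: {speech_reading_benefit(snr30_ao, snr30_av):+.1f} dB")  # +3.5 dB
```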
22

Intégration multisensorielle et variabilité interindividuelle / Multisensory integration and interindividual variability

Gueguen, Marc 08 December 2011
Every individual extracts directional invariants from sensory information, which allow them to perceive their own spatial situation as well as that of surrounding objects. In ordinary situations, spatial orientation is processed from largely redundant frames of reference. This redundancy allows both the development of an individual's preferential sensitivity to one of these reference frames and considerable flexibility in the choice of reference frame. In this work we tested two main hypotheses. The first is that the interindividual differences observed in spatial orientation perception tasks depend on the choice of reference frame. The second is that one of the factors explaining interindividual variability in such tasks lies in the multisensory integration rules used by individuals. The results show that individuals appear to take all frames of reference into account, rejecting the first hypothesis, according to which interindividual differences would be explained by the ability of some subjects to exploit the "right" frame of reference. The results also show that one of the factors explaining interindividual differences lies in the way the central nervous system combines the different sources of information: some subjects (DC) appear unable to minimize the influence of the "wrong" frames of reference (the least appropriate, the most biased) by reducing their respective weights.
23

Cognitive resources in audiovisual speech perception

Buchan, Julie N. 11 October 2011
Most events that we encounter in everyday life provide our different senses with correlated information, and audiovisual speech perception is a familiar instance of multisensory integration. Several approaches are used to further examine the role of cognitive factors in audiovisual speech perception. The main focus of this thesis is the influence of cognitive load and selective attention on audiovisual speech perception, as well as the integration of auditory and visual information from talking distractor faces. The influence of cognitive factors on the temporal integration of auditory and visual speech, and on gaze behaviour during audiovisual speech, is also addressed. The overall results of the experiments presented here suggest that the integration of auditory and visual speech information is quite robust to various attempts to modulate it. Adding a cognitive load task produces minimal disruption of the integration of auditory and visual speech information. Changing attentional instructions so that subjects selectively attend to either the auditory or the visual speech information also has a rather modest influence on the observed integration. Generally, the integration of temporally offset auditory and visual information seems insensitive to cognitive load or selective attention manipulations. The processing of visual information from distractor faces appears to be limited. The language of the visually articulating distractors does not appear to provide information that helps match the auditory and visual speech streams, and audiovisual speech distractors are not substantially more distracting than auditory distractor speech paired with a still image, suggesting limited processing or integration of the visual and auditory distractor information. Gaze behaviour during audiovisual speech perception appears to be relatively unaffected by an increase in cognitive load, but is somewhat influenced by attentional instructions to selectively attend to the auditory or visual information. Additionally, both the congruency of the consonant and the temporal offset of the auditory and visual stimuli have small but rather robust influences on gaze. / Thesis (Ph.D, Psychology) -- Queen's University, 2011-09-30 23:31:07.754
24

Étude psychophysique d'une illusion visuelle induite par le son / Psychophysical study of a sound-induced visual illusion

Éthier-Majcher, Catherine January 2008
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
25

Multisensory integration of redundant and complementary cues

Hartcher-O'Brien, Jessica January 2012
During multisensory integration, information from distinct sensory systems that refers to the same physical event is combined. For example, the sound and image that an individual generates as s/he interacts with the world will provide the nervous system with multiple cues which can be integrated to estimate the individual’s position in the environment. However, the information that is perceived through different sensory pathways/systems can be qualitatively different. The information can be redundant, describing the same property of an event in a common reference frame (i.e., the image and sound referring to the individual’s location), or it can be complementary. Combining complementary information can be advantageous in that it extends the range and richness of the information available to the nervous system, but it can also be superfluous to the task at hand; for example, olfactory cues from the individual's perfume can increase the richness of the representation without necessarily aiding localisation. Over the last century or so, a large body of research has focused on different aspects of multisensory interactions at both the behavioural and neural levels. It is currently unclear whether the mechanisms underlying multisensory interactions are similar for both types of cue. Moreover, the evidence for differences in behavioural outcome, dependent on the nature of the cue, is growing. Such cue property effects possibly reflect a processing heuristic for more efficient parsing of the vast amount of sensory information available to the nervous system at any one time. The present thesis assesses the effects of cue properties (i.e., redundant or complementary) on multisensory processing and reports a series of experiments demonstrating that the nature of the cue, defined by the task of the observer, influences whether the cues compete for representation as a result of interacting, or whether instead multisensory information produces an optimal increase in the reliability of the event estimate. Moreover, a bridging series of experiments demonstrates the key role of redundancy in inferring that two signals have a common physical cause and should be integrated, despite conflict in the cues. The experiments provide insights into the different strategies adopted by the nervous system and some tentative evidence for possible, distinct underlying mechanisms.
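As a minimal illustration of the "optimal increase in reliability" for redundant cues mentioned above, under the standard maximum-likelihood cue-combination model (an assumption of this sketch, not a claim about the thesis's own analysis), each cue is weighted by its inverse variance and the fused estimate has lower variance than either cue alone. The numbers are hypothetical.

```python
def fuse_redundant_cues(estimates, variances):
    """Reliability-weighted (inverse-variance) fusion of redundant cues.

    Returns the combined estimate and its variance; under this model the
    combined variance is never larger than the smallest single-cue variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Hypothetical visual and auditory location estimates (degrees azimuth)
print(fuse_redundant_cues([2.0, 6.0], [1.0, 4.0]))  # -> (2.8, 0.8)
```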
26

Etude des interactions multi-sensorielle pour la perception des mouvements du véhicule en simulateur dynamique : contribution de l'illusion somatogravique à l'immersion en environnement virtuel / Study of multisensory interactions in the perception of vehicle motion in a dynamic driving simulator: contribution of the somatogravic illusion to immersion in a virtual environment

Stratulat, Anca 06 October 2011
Driving simulators make it possible to explore research questions that are difficult to address in real conditions, such as the integration of different sensory inputs (visual, vestibular and somesthetic) in the perception of self-motion. Despite their complexity, driving simulators do not always produce a realistic sensation of driving, especially during braking and turning, because of their mechanical limitations. As a consequence, simulator motion-cueing laws rely on the tilt-coordination technique, which tilts the cabin so that the driver's gravitational force is aligned with the gravito-inertial acceleration (GIA) that would result from a linear acceleration. The technique exploits the tilt-translation ambiguity of the vestibular system and is combined with linear translations in the so-called washout algorithm to produce a sensation of linear acceleration. The aim of the present research is to understand how humans combine multiple sensory signals (vestibular, visual and somatosensory) when perceiving linear acceleration on a driving simulator. The experiments show that the perception of motion depends on how tilt and translation are combined to provide a unified percept of linear acceleration. Moreover, our results show an important difference between how accelerations and decelerations are perceived. For braking, the most realistic tilt/translation ratio depends on the level of deceleration. For acceleration, motion is generally overestimated and depends on the level of acceleration, but not on the tilt/translation ratio. These results suggest that visual, vestibular and somatosensory cues are integrated in an optimal Bayesian fashion. In conclusion, it is not advisable to use a washout algorithm without taking the non-linearity of human perception into account. We propose a data-driven model that describes the relationship between tilt, translation and the desired level of acceleration or deceleration; this model can supplement motion-cueing algorithms to improve the realism of driving simulations.
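A minimal sketch of the tilt-coordination idea described above, using the usual rationale that pitching the cabin by an angle θ makes gravity contribute g·sin(θ) of sustained specific force along the driver's axis. The function name, tilt limit and example values are illustrative assumptions, not the simulator's actual motion-cueing law.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tilt_coordination(a_sustained, tilt_limit_deg=20.0):
    """Split a desired sustained longitudinal acceleration (m/s^2) into a
    cabin pitch angle and a residual acceleration to be rendered by a
    transient translation (later removed by the washout filter)."""
    # Pitch so that the gravity component along the body axis matches the target,
    # within the platform's tilt limit; the tilt-rate threshold of the vestibular
    # system is ignored in this sketch.
    theta = math.asin(max(-1.0, min(1.0, a_sustained / G)))
    limit = math.radians(tilt_limit_deg)
    theta = max(-limit, min(limit, theta))
    residual = a_sustained - G * math.sin(theta)  # handled by translation + washout
    return math.degrees(theta), residual

print(tilt_coordination(-3.0))  # e.g. a 3 m/s^2 braking cue -> about (-17.8 deg, 0.0)
```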
27

Étude des règles de l'intégration multisensorielle en kinesthésie et de leur évolution liée à l’âge : approches psychophysiques & modélisation bayésienne / Study of multisensory integration rules in kinaesthesia and their evolution related to aging: psychophysics & Bayesian modelling

Chancel, Marie 05 December 2016
In everyday life, our central nervous system uses and integrates multiple sources of sensory information to perceive body and limb movements coherently and efficiently. An extensive literature has described the individual contributions of visual, muscle proprioceptive and tactile inputs to this perception, called kinesthesia, but questions remain about the principles governing the integration of these senses for kinesthetic purposes. This thesis provides some answers, highlighting the major contribution of muscle proprioception to the construction of multisensory kinesthetic percepts in young adults and the changes that occur with age in the rules governing the integration of vision, touch and muscle proprioception in kinesthesia. Revisiting a classical illusory phenomenon implicitly assumed to be of purely visual origin, the mirror paradigm, we first investigated how contralateral muscle proprioceptive afferents contribute to the visuo-proprioceptive estimation of arm movements. Our results show that the illusion of movement of an arm hidden behind a mirror, created by the reflection of the moving contralateral arm, emerges from the integration of visual information together with bilateral proprioceptive afferents. Then, to estimate more precisely the relative contributions of and interactions between vision, touch and muscle proprioception, we applied sensory stimulations specific to visual, tactile and/or muscle proprioceptive inputs, inducing illusory sensations of hand movement in perfectly still participants. Combining psychophysical methods with Bayesian modelling, we demonstrate that integrating visual and tactile information optimizes the ability to discriminate hand rotation velocity; nevertheless, kinesthetic perception remains biased in favor of muscle proprioceptive information in young adults. Finally, since all sensory systems decline across the lifespan, we examined the possible evolution of these integration rules with age and found a marked reshaping of the relative weighting of the three inputs in older individuals, in favor of tactile and visual afferents, probably due to a greater impairment of muscle proprioceptive sensitivity.
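As a minimal sketch of the psychophysical side of the approach described above: velocity-discrimination performance is commonly summarized by fitting a cumulative-Gaussian psychometric function, whose spread gives the discrimination threshold that Bayesian cue-combination models then treat as the cue's noise. The data, function names and parameter values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(v, pse, sigma):
    """Probability of judging the comparison rotation as faster than the
    reference, modeled as a cumulative Gaussian."""
    return norm.cdf(v, loc=pse, scale=sigma)

# Hypothetical comparison velocities (deg/s) and proportion of "faster" responses
velocities = np.array([4, 6, 8, 10, 12, 14, 16], dtype=float)
p_faster   = np.array([0.05, 0.12, 0.33, 0.55, 0.78, 0.90, 0.97])

(pse, sigma), _ = curve_fit(psychometric, velocities, p_faster, p0=[10.0, 3.0])
jnd = sigma * norm.ppf(0.75)  # distance from the PSE to the 75% point
print(f"PSE = {pse:.1f} deg/s, sigma = {sigma:.1f}, JND = {jnd:.1f} deg/s")
```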
28

Music to our eyes: Assessing the role of experience for multisensory integration in music perception

Graham, Robert Edward 01 December 2017
Based on research on the “McGurk Effect” (McGurk & MacDonald, 1976) in speech perception, some researchers (e.g. Liberman & Mattingly, 1985) have argued that humans uniquely interpret auditory and visual (motor) speech signals as a single intended audiovisual articulatory gesture, and that such multisensory integration is innate and specific to language. Our goal for the present study was to determine whether a McGurk-like effect holds for music perception as well, a domain in which innateness and experience can be disentangled more easily than in language. We sought to investigate the effects of visual musical information on auditory music perception and judgment, the impact of music experience on such audiovisual integration, and the role of eye gaze patterns as a potential mediator between music experience and the extent of visual influence on auditory judgments. A total of 108 participants (ages 18-40) completed a questionnaire and melody/rhythm perception tasks to determine music experience and abilities, and then completed speech and musical McGurk tasks. Stimuli were recorded from five sounds produced by a speaker or musician (a cellist and a trombonist) that ranged incrementally along a continuum from one type to another (e.g. non-vibrato to strong vibrato). In the audiovisual condition, these sounds were paired with videos of the speaker/performer producing one type of sound or the other (representing either end of the continuum) such that the audio and video matched or mismatched to varying degrees. Participants indicated, on a 100-point scale, the extent to which the auditory presentation represented one end of the continuum or the other. Auditory judgments for each sound were then compared across visual pairings to determine the impact of visual cues on auditory judgments. Additionally, several types of music experience were evaluated as potential predictors of the degree to which visual stimuli influenced auditory judgments. Finally, eye gaze patterns were measured in a separate sample of 15 participants to assess relationships between music experience and eye gaze patterns, and between eye gaze patterns and the extent of visual influence on auditory judgments. Results indicated a reliable “musical McGurk Effect” for cello vibrato sounds, but weaker overall effects for trombone vibrato sounds and for cello pluck and bow sounds. Limited evidence was found that music experience affects the extent to which individuals are influenced by visual stimuli when making auditory judgments; the support that was obtained, however, pointed to a possible reduction in visual influence associated with music “production” experience. Potential relationships between music experience and eye-gaze patterns were identified. Implications for audiovisual integration in speech and music perception are discussed, and future directions are proposed.
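A minimal sketch of the comparison described above, assuming (for this sketch only, not as the study's actual analysis) that visual influence is summarized as the shift in mean 100-point ratings of the same ambiguous sound between its two video pairings. All ratings are hypothetical.

```python
import numpy as np

# Hypothetical 100-point ratings of one ambiguous cello token
# (0 = no vibrato, 100 = strong vibrato) under its two video pairings
ratings_with_vibrato_video    = np.array([62, 70, 58, 66, 73])
ratings_with_no_vibrato_video = np.array([41, 48, 39, 52, 44])

# Visual influence: how far the video pairing pulls the auditory judgment
visual_influence = ratings_with_vibrato_video.mean() - ratings_with_no_vibrato_video.mean()
print(f"Visual influence on auditory judgment: {visual_influence:.1f} points")  # 21.0
```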
29

Investigating Compensatory Mechanisms for Sound Localization: Visual Cue Integration and the Precedence Effect

January 2015
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can use various perceptual compensatory mechanisms to increase the reliability of sound localization when the physical evidence is ambiguous. For example, the directional information of echoes can be perceptually suppressed by the direct sound to yield a single, fused auditory event, a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect, classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet in synchrony with his or her speech (de Gelder and Bertelson, 2003). If the ventriloquist is successful, the sound is “captured” by vision and perceived as originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating either the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated the perceived locations of these stimuli under free-field conditions. The results showed that light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend it to stereophonic phantom sound sources. / Dissertation/Thesis / Masters Thesis Bioengineering 2015
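A minimal sketch of how the two kinds of stereophonic phantom sources described above can be generated: delaying the onset of one loudspeaker (ISI) or attenuating it (level difference). The function name, sample rate and parameter values are illustrative assumptions, not the stimulus parameters used in the thesis.

```python
import numpy as np

FS = 44100  # sample rate, Hz

def phantom_source(signal, isi_ms=0.0, level_diff_db=0.0):
    """Return left/right loudspeaker signals for a stereophonic phantom source.

    isi_ms        -- onset delay of the right loudspeaker relative to the left
    level_diff_db -- attenuation of the right loudspeaker relative to the left
    """
    delay = int(round(isi_ms * 1e-3 * FS))
    gain = 10.0 ** (-level_diff_db / 20.0)
    left = np.concatenate([signal, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), signal * gain])
    return left, right

# Example: 50 ms noise burst, phantom image pulled toward the left loudspeaker
burst = np.random.randn(int(0.05 * FS))
L1, R1 = phantom_source(burst, isi_ms=0.5)        # timing-based phantom source
L2, R2 = phantom_source(burst, level_diff_db=6.0)  # level-based phantom source
```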
30

Représentation du corps et anorexie mentale : de l’intégration sensorielle à l’action : approche neurocognitive du phénomène de distorsion corporelle / Body schema and anorexia nervosa: from sensory integration to action: a neurocognitive approach to body distortion

Guardia, Dewi 21 December 2012
The everyday ability to make judgments about one's own and other people's body-scaled actions is disrupted in anorexia nervosa (AN). Compared with a control group, patients with AN significantly overestimated the passability threshold of an aperture, both when simulating the action and when actually walking through a real opening. These data are concordant with the patients' clinical complaints that they feel larger than they really are. Judgments were impaired when patients adopted a first-person perspective (I perform the action) but preserved in the third-person perspective (I watch someone else perform the action), suggesting a specific disturbance of the body schema rather than a general impairment of perceptual judgments. This overestimation of the body schema in AN may be related to a disturbance of multisensory integration, since a coherent body schema results from the integration of visual, tactile, proprioceptive and vestibular inputs. Behavioural performance was correlated with the severity of the eating disorder, including drive for thinness, body concern and body dissatisfaction; these body-related disturbances and their behavioural consequences could in turn reinforce restrictive eating behaviours. The patients' performance was related both to their weight loss over the preceding months and to their pre-illness body weight. This finding supports the hypothesis of a failure to update the body schema: the morphological changes produced by rapid and massive weight loss may not be taken into account by the central nervous system. Anorexia nervosa mainly affects young women between 15 and 19 years of age, a period when major physiological and psychological changes associated with puberty already affect the body schema, and weight changes induced by the eating disorder could reinforce these disturbances. The study of neurological phenomena such as phantom limb syndrome may shed light on this point: many amputees continue to feel the presence of a phantom limb after amputation, and one explanatory model postulates a mismatch between the sensory feedback from the phantom and the cortical regions representing the limb. In AN, a similar conflict could arise between a previous body schema that has not taken the weight change into account and the current sensorimotor feedback, leaving patients effectively locked into a larger body.
