1 |
Cross modal aspects of attention in normal individuals and those with multiple sclerosis. McCarthy, Marie Majella. January 1996 (has links)
No description available.
|
2 |
Cerebral asymmetries for temporal resolution. Nicholls, Michael E. R. January 1993 (has links)
No description available.
|
3 |
Multisensory Processing in Simulated Driving / Feeling the Road: Multisensory Processing in Simulated Driving. Pandi, Maryam. January 2018 (has links)
Studies that explore the integration of visual, auditory, or vestibular cues are typically derived from stimulus detection and discrimination tasks in which stimuli are selective and tightly controlled. Multisensory processing is not as well understood in more dynamic and realistic tasks such as driving. As vision is the dominant source of information when controlling a vehicle, we were interested in the contribution of auditory and physical motion (vestibular and proprioceptive) information to vehicle control. The simulated environment consisted of a straight, two-lane road, and the task was to drive in the center of the right lane and maintain a constant speed, slowing down for occasional speed bumps. We examined differences in driving performance under four sets of sensory cues: visual only; visual and auditory; visual and physical motion; and visual, auditory, and physical motion. The quality of visual information was manipulated across two experiments. In Experiment 1, participants drove in daylight in sunny weather, providing excellent visual information. In Experiment 2, visual information was compromised by dark and stormy weather conditions. In both experiments we observed an advantage of multisensory information, an effect that was enhanced when visual information was compromised. Auditory cues were especially effective in improving driver control. / Thesis / Master of Science (MSc) / Multisensory processing (combining information from different sensory systems) is not well understood in realistic tasks such as driving. A simulated environment consisting of a straight, two-lane road was used for this study. The task was to drive in the center of the right lane and maintain a constant speed, slowing down for occasional speed bumps. We examined differences in driving performance under four sets of sensory cues: visual only; visual and auditory; visual and physical motion; and visual, auditory, and physical motion. The visual information was manipulated across two experiments: first, participants drove in daylight in sunny weather, providing excellent visual information; next, visual information was compromised by dark and stormy weather conditions. In both experiments we observed an advantage of multisensory information, an effect that was enhanced when visual information was compromised. Auditory cues were especially effective in improving driver control.
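As a rough illustration of how the four-condition comparison could be analysed, the sketch below runs paired t-tests of each multisensory condition against the visual-only baseline. All numbers are invented for illustration (the abstract reports no raw data), and numpy/scipy are assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical RMS lane-position error (metres) for 20 simulated drivers
# under the four cue conditions described in the abstract.
errors = {
    "visual only":            rng.normal(0.45, 0.08, 20),
    "visual+auditory":        rng.normal(0.38, 0.08, 20),
    "visual+motion":          rng.normal(0.40, 0.08, 20),
    "visual+auditory+motion": rng.normal(0.34, 0.08, 20),
}

# Paired t-tests of each multisensory condition against the visual-only
# baseline; a positive mean difference is a multisensory advantage.
baseline = errors["visual only"]
for cond in ("visual+auditory", "visual+motion", "visual+auditory+motion"):
    t, p = stats.ttest_rel(baseline, errors[cond])
    gain = (baseline - errors[cond]).mean()
    print(f"{cond:>24}: advantage {gain:+.3f} m, t = {t:.2f}, p = {p:.4f}")
```

A lower lane-keeping error in a combined-cue condition relative to visual-only would correspond to the multisensory advantage reported above.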
|
4 |
Processing of Spontaneous Emotional Responses in Adolescents and Adults with Autism Spectrum Disorders: Effect of Stimulus Type. Cassidy, S., Mitchell, Peter, Chapman, P., Ropar, D. 04 June 2020 (has links)
Yes / Recent research has shown that adults with autism spectrum disorders (ASD) have difficulty interpreting others' emotional responses in order to work out what actually happened to them. It is unclear what underlies this difficulty: important cues may be missed from fast-paced dynamic stimuli, or spontaneous emotional responses may be too complex for those with ASD to recognise successfully. To explore these possibilities, 17 adolescents and adults with ASD and 17 neurotypical controls viewed 21 videos and pictures of people's emotional responses to gifts (chocolate, a handmade novelty, or Monopoly money), then inferred what gift the person received and the emotion expressed by the person, while eye movements were measured. Participants with ASD were significantly more accurate at distinguishing who received a chocolate or homemade gift from static (compared to dynamic) stimuli, but significantly less accurate when inferring who received Monopoly money from static (compared to dynamic) stimuli. Both groups made similar emotion attributions to each gift in both conditions (positive for chocolate, feigned positive for homemade, and confused for Monopoly money). Participants with ASD made only marginally significantly fewer fixations to the eyes and to the face of the person than typical controls in both conditions. The results suggest that adolescents and adults with ASD can distinguish subtle emotion cues for certain emotions (genuine from feigned positive) when given sufficient processing time; however, dynamic cues are informative for recognising emotion blends (e.g. smiling in confusion). This indicates difficulties processing complex emotional responses in ASD.
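Fixation comparisons like the one reported here are typically computed as time spent in predefined areas of interest (AOIs) such as the eyes and face. Below is a minimal sketch of that reduction; the AOI boxes and fixation data are made up and are not from the study.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze position in screen pixels
    y: float
    duration_ms: float

# Hypothetical areas of interest (x0, y0, x1, y1); the eyes AOI sits
# inside the face AOI, as in typical face-scanning analyses.
AOIS = {
    "eyes": (300, 180, 500, 240),
    "face": (260, 140, 540, 420),
}

def in_aoi(f, box):
    x0, y0, x1, y1 = box
    return x0 <= f.x <= x1 and y0 <= f.y <= y1

def aoi_proportions(fixations):
    """Proportion of total fixation time falling inside each AOI."""
    total = sum(f.duration_ms for f in fixations) or 1.0
    return {name: sum(f.duration_ms for f in fixations if in_aoi(f, box)) / total
            for name, box in AOIS.items()}

# One invented trial: two fixations on the face (one of them on the eyes),
# and one fixation off the face entirely.
trial = [Fixation(410, 200, 310), Fixation(330, 380, 220), Fixation(90, 60, 150)]
print(aoi_proportions(trial))  # approximately {'eyes': 0.456, 'face': 0.779}
```

Group differences in such proportions are then what a "fewer fixations to the eyes" claim is tested against.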
|
5 |
The role of edutainment in e-learning : an empirical study. Ayad, Khaled A. A. January 2011 (has links)
Impersonal, non-face-to-face contact and text-based interfaces in the e-Learning segment present major problems for learners, since they miss out on vital personal interactions and useful feedback messages, as well as on real-time information about their learning performance. This research programme proposes a multimodal approach combined with edutainment, which is expected to improve communication between users and e-Learning systems. This thesis empirically investigates users' effectiveness, efficiency, and satisfaction in order to determine the influence of edutainment (e.g. amusing speech and facial expressions) combined with multimodal metaphors (e.g. speech, earcons, avatars) within e-Learning environments. Besides text, speech, visual, and earcon modalities, avatars are incorporated to offer a visual and auditory realm in online learning. The methodology used for this research project comprises a literature review and three experimental platforms. The initial experiment serves as a first step towards investigating the feasibility of completing all the tasks and objectives of the research project outlined above. The remaining two experiments further explore the role of edutainment in enhancing e-Learning user interfaces. The overall challenge is to enhance user-interface usability; to improve the presentation of learning in e-Learning systems; to improve user enjoyment; to enhance interactivity and learning performance; and to contribute to developing guidelines for multimodal involvement in the context of edutainment. The results of the experiments presented in this thesis show an improvement in user enjoyment, as measured by satisfaction. In the first experiment, the enjoyment level increased by 11% in the Edutainment (E) platform compared to the Non-edutainment (NE) interface. In the second experiment, the Game-Based Learning (GBL) interface obtained 14% greater enhancement than the Virtual Class (VC) interface and 20.85% more than the Storytelling (ST) interface; in the third experiment, the game incorporating avatars increased enjoyment by a further 3% compared with the other platforms. In addition, improvements in both user performance and learning retention were detected through effectiveness and efficiency measurements. In the first experiment, the difference between mean completion times for conditions (E) and (NE) was not significant when tested with a t-test. In the second experiment, the time spent in condition (GBL) was higher by 7-10 seconds than in the other conditions. In the third experiment, the mean times taken by users in all conditions were comparable, with an average of 22.8%. With regard to effectiveness, the findings of the first experiment showed that the mean number of correct answers for condition (E) was higher by 20% than for condition (NE). In the second experiment, users in condition (GBL) performed better than users in the other conditions: the percentage of correct answers was higher by 20% and 34.7% in condition (GBL) than in (VC) and (ST), respectively. Finally, a set of empirically derived guidelines was produced for the design of usable multimodal e-Learning and edutainment interfaces.
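The efficiency and effectiveness measures described above boil down to a t-test on completion times and a relative difference in mean correct answers. Here is a hedged sketch with invented numbers, chosen only to echo the reported pattern (a non-significant time difference and a roughly 20% accuracy gain for condition (E)); numpy and scipy are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented task-completion times (seconds) for the Edutainment (E) and
# Non-edutainment (NE) interfaces; the thesis reports no significant
# difference between the two means under a t-test.
times_e = rng.normal(120, 15, 30)
times_ne = rng.normal(122, 15, 30)
t, p = stats.ttest_ind(times_e, times_ne)
print(f"efficiency: t = {t:.2f}, p = {p:.3f}")

# Invented correct-answer counts; the thesis reports roughly 20% more
# correct answers in condition (E) than in (NE).
correct_e = np.array([8, 9, 7, 9, 8])
correct_ne = np.array([6, 7, 6, 8, 7])
gain = (correct_e.mean() - correct_ne.mean()) / correct_ne.mean() * 100
print(f"effectiveness: {gain:.1f}% more correct answers in (E)")
```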
|
6 |
Comparaison et combinaison de rendus visuels et sonores pour la conception d'interfaces homme-machine : des facteurs humains aux stratégies de présentation à base de distorsion / Comparison and combination of visual and audio renderings to conceive human-computer interfaces: from human factors to distortion-based presentation strategies. Bouchara, Tifanie. 29 October 2012 (has links)
Although more and more sound and audiovisual data are available, the majority of access interfaces are based solely on a visual presentation. Many visualization techniques have been proposed that use simultaneous presentation of multiple documents and distortions to highlight the most relevant information. We propose to define equivalent audio techniques for the presentation of several competing sound files, and to combine such audio and visual presentation strategies optimally for multimedia documents. To better adapt these strategies to the user, we studied the attentional and perceptual processes involved in listening to and watching simultaneous audiovisual objects, focusing on the interactions between the two modalities. Combining visual size and sound level parameters, we extended the visual concept of the magnifying lens to the auditory and audiovisual modalities. Exploiting this concept, a navigation application for a video collection was developed. We compared our tool with another rendering mode called Pan & Zoom through a usability study. The results, especially the subjective ones, encourage further research into multimodal presentation strategies that combine an audio rendering with the visual renderings already available. A second study concerned the identification of environmental sounds in a noisy environment in the presence of a visual context. The noise simulated the presence of multiple competing sounds, as would be observed in an interface where several multimedia documents are presented together. The experimental results confirmed the multimodal advantage under conditions of audio degradation. Moreover, beyond the primary goals of the thesis, this study confirmed the importance of semantic congruency between visual and auditory components for object recognition and provided deeper knowledge about the auditory perception of environmental sounds. Finally, we investigated the attentional processes involved in the search for a specific object among many, especially the "pop-out" phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We extended the concept of visual blur to the auditory and audiovisual modalities by analogy. A series of experiments confirmed that a sharp object among blurred objects attracts attention, regardless of the modality. Search and identification processes are accelerated when the sharpness cue is applied to the target, but slowed when it is applied to a distractor, highlighting an involuntary guidance effect. Concerning crossmodal interaction, a redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require distortion of both modalities.
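The redundancy finding in the last paragraph is usually checked by comparing the bimodal condition against the faster of the two unimodal conditions. The sketch below does exactly that with invented reaction times (not the thesis data); numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented search reaction times (ms): the sharpness cue marks the target
# in one modality, in both modalities redundantly, or not at all.
rt = {
    "no cue":           rng.normal(900, 90, 40),
    "visual sharpness": rng.normal(780, 90, 40),
    "audio sharpness":  rng.normal(800, 90, 40),
    "redundant (A+V)":  rng.normal(720, 90, 40),
}
for cond, samples in rt.items():
    print(f"{cond:>18}: mean RT {samples.mean():6.1f} ms")

# Redundancy gain: how much faster the bimodal cue is than the better
# of the two unimodal cues.
best_unimodal = min(rt["visual sharpness"].mean(), rt["audio sharpness"].mean())
print(f"redundancy gain: {best_unimodal - rt['redundant (A+V)'].mean():.1f} ms")
```

A positive redundancy gain is the pattern the thesis reports for combined audio and visual blur.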
|
7 |
Предикати перцепције у руском и српском језику / Predikati percepcije u ruskom i srpskom jeziku / Predicates of Perception in Russian and Serbian. Popović, Dragana. 23 June 2016 (has links)
This dissertation focuses on systemic relationships among the basic predicates (verbs) of perception in Russian and Serbian. It investigates issues related to the lexicon, the classification of linguistic units, the relationships between the meanings of lexemes and their morphological and syntactic features, as well as the definition of the main members of the analysed lexico-semantic group. The basic predicates of perception in Russian and Serbian are positioned within semantic paradigms based on the interaction of differential and general components of meaning of their members. The members of the paradigms are selected according to criteria established in accordance with the principle of organizing lexical systems into core and periphery. The positioning of the selected representatives of visual, auditory, olfactory, gustatory, and tactile perception, as well as their aspectual correlates, determines the structure of the paradigms and the directions of semantic derivation within them.
|