1

Applications of Crossmodal Relationships in Interfaces for Complex Systems: A Study of Temporal Synchrony

Giang, Wayne Chi Wei, January 2011
Current multimodal interfaces for complex systems, such as those designed using the Ecological Interface Design (EID) methodology, have largely focused on effective design of interfaces that treat each sensory modality either as an independent channel of information or as a way to provide redundant information. However, operationally related information is often presented in different sensory modalities, and very little research has examined how information in different modalities can be linked at a perceptual level. When related information is presented through multiple sensory modalities, interface designers will require perceptual methods for linking relevant information together across modalities. This thesis examines one possible crossmodal perceptual relationship, temporal synchrony, and evaluates whether it is useful in the design of multimodal interfaces for complex systems. Two metrics for evaluating crossmodal perceptual relationships were proposed: resistance to changes in workload, and stream monitoring awareness. Two experiments were used to evaluate these metrics. The first experiment showed that temporal rate synchrony was not resistant to changes in workload, manipulated through a secondary visual task. The second experiment showed that participants who used crossmodal temporal rate synchrony to link information in a multimodal interface did not monitor the two presented streams of information any better than participants using equivalent unimodal interfaces. Taken together, these findings suggest that temporal rate synchrony may not be an effective method for linking information across modalities, and that crossmodal perceptual relationships may be very different from intra-modal ones. Nevertheless, methods for linking information across sensory modalities remain an important goal for interface designers and a key feature of future multimodal interface design for complex systems.
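
As a rough illustration of the linking cue this abstract describes, a visual and an auditory stream can be "linked" by driving both at the same pulse rate, so an operator can perceive them as referring to the same source. The sketch below is purely illustrative; the stream names, rates, and tolerance are hypothetical and are not taken from the thesis.

```python
# Minimal sketch (not the thesis's actual stimuli): two display streams are
# treated as crossmodally linked when they share a pulse rate.
from dataclasses import dataclass


@dataclass
class Stream:
    modality: str   # e.g. "visual" or "auditory"
    rate_hz: float  # pulse (flash/beep) rate of the stream


def pulse_onsets(stream: Stream, duration_s: float) -> list[float]:
    """Onset times (seconds) of the stream's pulses over one trial."""
    period = 1.0 / stream.rate_hz
    return [i * period for i in range(int(duration_s * stream.rate_hz))]


def rate_synchronous(a: Stream, b: Stream, tolerance_hz: float = 0.05) -> bool:
    """True if the two streams share a pulse rate, i.e. count as linked."""
    return abs(a.rate_hz - b.rate_hz) <= tolerance_hz


# Example: a flashing gauge and a beeping alarm referring to the same subsystem.
gauge = Stream("visual", rate_hz=2.0)
alarm = Stream("auditory", rate_hz=2.0)
print(rate_synchronous(gauge, alarm))  # True -> perceived as one linked pair
```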
2

Level of detail for granular audio-graphic rendering: representation, implementation, and user-based evaluation

Ding, Hui, 30 September 2013
Real-time simulation of complex audio-visual scenes remains challenging because the rendering processes in each modality are technically independent but perceptually related. Since auditory and visual perception are potentially crossmodally dependent, optimizations of graphics and sound rendering, such as Level of Detail (LOD), should be considered jointly rather than as separate problems. In both audition and vision, people have perceptual limits on the quality they can observe. Perceptually driven LOD techniques for graphics have advanced greatly over the past decades, yet the concept of LOD is rarely considered in crossmodal evaluation and rendering. This thesis concentrates on the crossmodal, psychophysical evaluation of perception of audio-visual LOD rendering, on which a functional and general method for optimizing the rendering can eventually be built. The first part of the thesis is an overview of our research. In it, we review various LOD approaches and discuss the relevant issues, especially from a crossmodal perceptual perspective. We also discuss the main results on the design, rendering, and applications of highly detailed interactive audio and graphical scenes from the ANR Topophonie project, within which this thesis took place. A review of psychophysical methods for evaluating audio-visual perception is also presented to ground the experimental design. In the second part, we study the perception of image artifacts in audio-visual LOD rendering: a series of experiments investigates how an additional audio modality can affect the visual detection of artifacts produced by impostor-based LOD. The third part focuses on the extended X3D representation that we designed for audio-visual LOD modeling. In the fourth part, we present the design and evaluation of the refined crossmodal LOD system; the audio-visual perception of this system was evaluated through a series of psychophysical experiments. Our main contribution is a deeper understanding of crossmodal LOD, supported by new observations and explored through perceptual experiments and analysis. The results of our work can serve as empirical evidence and guidelines for a perceptually driven crossmodal LOD system.
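
To make the "combined manner" argument concrete, a perceptually driven crossmodal LOD policy could couple the graphics and audio detail decisions for an object instead of making them separately. The sketch below is a hypothetical illustration under assumed thresholds and level names; it does not reproduce the thesis's actual policy or its X3D extension.

```python
# Hedged sketch of joint audio-graphic LOD selection (illustrative only).
from dataclasses import dataclass


@dataclass
class AudioGraphicLOD:
    mesh_level: int   # 0 = full mesh, higher = coarser impostor
    grain_count: int  # number of audio grains rendered for the object


def select_lod(distance_m: float, attended: bool) -> AudioGraphicLOD:
    """Choose graphics and audio detail together, not as separate decisions.

    Farther or unattended objects get a coarser impostor *and* fewer grains,
    on the assumption that perceptual limits apply across both modalities.
    """
    if distance_m < 5.0 and attended:
        return AudioGraphicLOD(mesh_level=0, grain_count=256)
    if distance_m < 20.0:
        return AudioGraphicLOD(mesh_level=1, grain_count=64)
    return AudioGraphicLOD(mesh_level=2, grain_count=8)


# Example: a distant, unattended rain source is rendered cheaply in both modalities.
print(select_lod(distance_m=35.0, attended=False))  # mesh_level=2, grain_count=8
```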