1

An empirical investigation in using multi-modal metaphors to browse internet search results : an investigation based upon experimental browsing platforms to examine usability issues of multi-modal metaphors to communicate internet-based search engine results

Ciuffreda, Antonio January 2008 (has links)
This thesis explores the use of multimodality to communicate the retrieved results of Internet search engines. The investigation aimed to identify suitable multimodal metaphors that would increase the usability of Internet search engine interfaces and enhance users' experience of the search activity. The study consisted of three experiments based on questionnaires and Internet search activities with a set of text-based and multimodal interfaces. These interfaces were implemented in two browsing platforms, named AVBRO and AVBRO II. In the first experiment, the efficiency of specific multimodal metaphors in communicating additional information about retrieved results was investigated. The experiment also sought to obtain users' views of these metaphors with a questionnaire. An experimental multimodal interface of the AVBRO platform, which communicated additional information with a combination of three 2D graphs and musical stimuli, was used as a basis for the experiment, together with the Google search engine. The results obtained led to the planning of a second experiment. The aim of this experiment was to obtain and compare the level of usability of four different experimental multimodal interfaces and one traditional text-based interface, all implemented in the AVBRO II platform. Effectiveness, efficiency and users' satisfaction were used as criteria to evaluate the usability of these interfaces. In the third and final experiment, the usability of a traditional text-based interface and of the two most suitable experimental multimodal interfaces of the AVBRO II platform was investigated further. Learnability, error rate, efficiency, memorability and users' satisfaction were used as criteria to evaluate the usability of these interfaces. The analysis of the results obtained from these experiments provided the basis for a set of design guidelines for the development of usable interfaces based on a multimodal approach.
2

Uma abordagem para projeto de aplicações com interação multimodal da Web / An approach to design Web multimodal interfaces

Talarico Neto, Americo 14 April 2011 (has links)
The main goal of developing multimodal applications is to enable a more natural way of communication between human beings and machines through interfaces that are more efficient, intuitive, easier to use and, in a certain way, more intelligent. However, the literature shows that the reuse of both knowledge and source code still presents problems, given the complexity of the code in multimodal systems, the lack of efficient mechanisms for usability testing, and the difficulty of managing the capture, storage and retrieval of design knowledge. This thesis argues that the use of a systematic, user-centered approach, supported by a computer tool and by a well-defined model that allows the development of multimodal interfaces with the reuse of Design Rationale, increases and improves usability levels, promotes the identification and use of design patterns, and fosters the reuse of components. To demonstrate this thesis, the text presents an approach for developing Web multimodal interfaces (MMWA) and its authoring environment (MMWA-ae), both composed of activities that help the design team during the design, development and usability evaluation phases of a project.
We also discuss the results obtained from three case studies, carried out in an academic environment, which aimed to determine the feasibility of the approach and the benefits that can be achieved by combining different techniques, such as design rationale, design patterns, task modeling, software components, usability principles, heuristic evaluations, user testing and association rules, among others. The results show that the approach and its authoring environment can provide several benefits to organizations that develop multimodal systems, including improved usability and, consequently, product quality, as well as reduced development cost and complexity through the reuse of code and of design knowledge captured in previous projects.
3

An Adaptive Approach to Exergames with Support for Multimodal Interfaces

Silva Salmeron, Juan Manuel 30 January 2013 (has links)
Technologies such as television, computers, and video games are often cited among the reasons why people lack physical activity and tend to gain weight and become obese. In the case of video games, with the advent of the so-called “serious games” initiative, a new breed of video games has emerged. Such games are called “exergames,” and they are intended to motivate the user to engage in physical activity. Although there is some evidence that some types of exergames are more physically demanding than traditional sedentary games, there is also evidence suggesting that such games do not really provide exertion at the intensity recommended for daily exercise. Currently, most exergames take a passive approach: there is no real tracking of the player's progress, no assessment of his or her level of exertion, no contextual information, and no adaptability in the game itself to change the conditions of the game and prompt the desired physiological response from the player. In this thesis we present research work towards the design and development of an architecture and related systems that support a shift in the exertion game paradigm. The contributions of this work are enablers in the design and development of exertion games with a strict serious game approach. Such games should have exercising as the primary goal, and a game engine developed under this scheme should be aware of the exertion context of the player. The game should be aware of the player's level of exertion and adapt the gaming context (in-game variables and exertion interface settings) so that the player can reach a predefined target exertion rate. To support such a degree of adaptability in a multimedia, multimodal system, we propose a system architecture that lays down general guidelines for the design and development of such systems.
4

Applications of Crossmodal Relationships in Interfaces for Complex Systems: A Study of Temporal Synchrony

Giang, Wayne Chi Wei January 2011 (has links)
Current multimodal interfaces for complex systems, such as those designed using the Ecological Interface Design (EID) methodology, have largely focused on effective design of interfaces that treat each sensory modality as either an independent channel of information or as a way to provide redundant information. However, there are many times when operationally related information is presented in different sensory modalities. There is very little research that has examined how this information in different modalities can be linked at a perceptual level. When related information is presented through multiple sensory modalities, interface designers will require perceptual methods for linking relevant information together across modalities. This thesis examines one possible crossmodal perceptual relationship, temporal synchrony, and evaluates whether the relationship is useful in the design of multimodal interfaces for complex systems. Two possible metrics for the evaluation of crossmodal perceptual relationships were proposed: resistance to changes in workload, and stream monitoring awareness. Two experiments were used to evaluate these metrics. The results of the first experiment showed that temporal rate synchrony was not resistant to changes in workload, manipulated through a secondary visual task. The results of the second experiment showed that participants who used crossmodal temporal rate synchrony to link information in a multimodal interface did not achieve better performance in the monitoring of the two streams of information being presented over equivalent unimodal interfaces. Taken together, these findings suggest that temporal rate synchrony may not be an effective method for linking information across modalities. Crossmodal perceptual relationships may be very different from intra-modal perceptual relationships. However, methods for linking information across sensory modalities are still an important goal for interface designers, and a key feature of future multimodal interface design for complex systems.
5

Designing interfaces for the visually impaired : Contextual information and analysis of user needs

Olofsson, Stina January 2018 (has links)
This thesis explores how to design for the visually impaired. During the course of the work, a literature study and interviews with blind and visually impaired people were conducted. The objective was to investigate what contextual information is wanted in new and unfamiliar spaces outside the home. The interviews also explored how participants experience the digital tools they use today and what they think of the possibilities of voice and other user interfaces. The main finding of the study is an indication that multimodal interfaces are preferred: the interface should combine voice, haptics and graphics, since the participants wanted to interact in different ways depending on functionality and context. Three main problem areas were identified: navigation, public transportation and shopping. Another result was that designs for the visually impaired should always be tested with people with a wide range of vision loss in order to find the correct contextual information.
6

Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video

Rovelo Ruiz, Gustavo Alberto 29 July 2015 (has links)
Human-Computer Interaction is a multidisciplinary research field that combines, amongst others, Computer Science and Psychology. It studies human-computer interfaces from the point of view of both technology and the user experience. Researchers in this area now have a great opportunity, mostly because the technology required to develop 3D user interfaces for computer applications (e.g. visualization, tracking or portable devices) is more affordable than it was a few years ago. Augmented Reality and Omni-Directional Video are two promising examples of this type of interface, in which the user is able to interact with the application in the three-dimensional space beyond the 2D screen. The work described in this thesis focuses on the evaluation of interaction aspects in both types of applications. The main goal is to contribute to the knowledge about this new type of interface in order to improve its design. We evaluate how computer interfaces can convey information to the user in Augmented Reality applications by exploiting human multisensory capabilities. Furthermore, we evaluate how the user can give commands to the system using more than one type of input modality, studying gesture-based interaction with Omni-Directional Video. We describe the experiments we performed, outline the results for each particular scenario and discuss the general implications of our findings. / Rovelo Ruiz, GA. (2015). Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53916
