About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Context-aware mixed reality: A learning-based framework for semantic-level interaction

Chen, L., Tang, W., John, N.W., Wan, T.R., Zhang, J.J. 16 December 2019 (has links)
Mixed reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding in MR, we can develop a system that not only greatly enhances user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
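A minimal sketch of the kind of object-specific behaviour such a material-aware system could drive, assuming hypothetical labels and coefficients (none are taken from the paper): once the scene is semantically segmented, each real surface's label selects physics parameters for the virtual objects that touch it.

```python
# Hypothetical sketch: mapping per-object semantic labels (e.g. from a
# segmentation network) to physics parameters for context-aware MR
# interactions. Label names and coefficient values are illustrative.
MATERIAL_PHYSICS = {
    "wood":   {"restitution": 0.45, "friction": 0.60},
    "metal":  {"restitution": 0.70, "friction": 0.40},
    "fabric": {"restitution": 0.10, "friction": 0.90},
}

def physics_for(label: str) -> dict:
    """Return physics parameters for a semantic label, with a safe default."""
    return MATERIAL_PHYSICS.get(label, {"restitution": 0.30, "friction": 0.50})

def on_virtual_object_contact(surface_label: str, impact_speed: float) -> float:
    """Rebound speed of a virtual ball hitting a real surface of a given material."""
    return impact_speed * physics_for(surface_label)["restitution"]

print(on_virtual_object_contact("metal", 2.0))   # 1.4
print(on_virtual_object_contact("fabric", 2.0))  # 0.2
```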
22

The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

Liu, Xiaozhen 30 June 2023 (has links)
As robots have become more pervasive in our everyday life, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' robot perception. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale with the participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference compared to the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound. A further exploratory study was conducted with an additional 49 young adults to investigate the effects of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotional and social perception of the robot. While listening to a fairy tale with the participants, Pepper responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice-plus-gesture condition than in the gesture-only condition. Again, happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. Interestingly, participants reported more discomfort and stronger anthropomorphism for male voices than for female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to feel warmth. However, neither the gender of the robot voice nor the gender of the participant affected emotion recognition accuracy. Results are discussed with social robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or entertainment, there are more and more scenarios in which people interact with robots, and research on human-robot interaction aims to develop more natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. Our experiments show that emotion recognition accuracy for regular voices is significantly higher than for musical and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. Neither the gender of the robot's voice nor the gender of the participant affected emotion recognition accuracy. People no longer lean toward traditional stereotypes of robotic voices (e.g., those of old movies), and expressing emotions with music and gestures mostly produced lower ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants reported more discomfort and human-likeness for male voices than for female voices; male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warmth. We conclude with design guidelines and future research directions for emotional cues in social robots.
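The per-emotion accuracies and confusion matrix reported above can be illustrated with a small sketch; the trial data below are made up, and this is not the study's analysis code.

```python
# Illustrative sketch: per-emotion recognition accuracy and a confusion
# matrix from trial records of (intended_emotion, recognized_emotion) pairs.
from collections import Counter, defaultdict

trials = [
    ("happiness", "happiness"), ("sadness", "sadness"),
    ("anger", "fear"), ("fear", "fear"), ("surprise", "happiness"),
]

confusion = defaultdict(Counter)
for intended, recognized in trials:
    confusion[intended][recognized] += 1

for emotion, row in confusion.items():
    accuracy = row[emotion] / sum(row.values())  # hits / all trials of this emotion
    print(f"{emotion}: accuracy={accuracy:.2f}, confusions={dict(row)}")
```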
23

Eye-gaze interaction techniques for use in online games and environments for users with severe physical disabilities

Vickers, Stephen January 2011 (has links)
Multi-User Virtual Environments (MUVEs) and Massively Multiplayer Online Games (MMOGs) are a popular, immersive genre of computer game. For some disabled users, eye-gaze offers the only input modality with the potential for sufficiently high bandwidth to support the range of time-critical interaction tasks required to play. Although there has been much research into gaze interaction techniques for computer interaction over the past twenty years, much of it has focused on 2D desktop application control. Some work has investigated gaze as an additional input device for gaming, but very little has examined using gaze on its own. Further, configuring these techniques usually requires expert knowledge, often beyond the capabilities of a parent, carer or support worker. The work presented in this thesis addresses these issues through the investigation of novel gaze-only interaction techniques that enable at least a beginner level of game play, together with a means of adapting the techniques to suit an individual. To achieve this, a collection of novel gaze-based interaction techniques has been evaluated through empirical studies and encompassed within an extensible software architecture that has been made available for free download. Further, a metric of reliability is developed that, when used as a measure within a specially designed diagnostic test, allows the interaction technique to be adapted to suit an individual. Methods of selecting interaction techniques based upon game task are also explored, and a novel methodology based on expert task analysis is developed to aid selection.
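As a rough illustration of gaze-only selection of the kind the thesis evaluates, the sketch below implements dwell-based selection; since the abstract does not define the reliability metric precisely, adaptation is approximated here as per-user tunable parameters (radius, dwell time).

```python
# Hedged sketch of gaze-only "dwell" selection: a target is selected when
# gaze samples stay within a radius for a dwell threshold. The tunable
# radius and dwell time stand in for the thesis's per-user adaptation;
# the actual reliability metric is not specified in the abstract.
import math

def dwell_select(samples, target, radius=40.0, dwell_ms=800, sample_ms=20):
    """Return True if gaze stays on target long enough; samples are (x, y) pixels."""
    needed = dwell_ms // sample_ms  # consecutive on-target samples required
    run = 0
    for x, y in samples:
        if math.hypot(x - target[0], y - target[1]) <= radius:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0  # gaze left the target; restart the dwell
    return False
```

A user with less stable gaze might be given a larger radius and shorter dwell, trading selection speed against accidental activations.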
24

Gestural interaction techniques for handheld devices combining accelerometers and multipoint touch screens

Scoditti, Adriano 28 September 2011 (has links)
In this thesis, we address the question of gestural interaction on mobile devices. These devices, now common, differ from conventional computers primarily in their input devices (small but touch-sensitive screens, various sensors such as accelerometers) and in the context in which they are used. The work presented here is an exploration of the vast area of interaction techniques on these mobile devices. We first structure this space by focusing on accelerometer-based techniques, for which we propose a taxonomy. Its descriptive and discriminative power is validated by the classification of thirty-seven interaction techniques from the literature. We then turn to the design of gestural interaction techniques for these mobile devices. With TouchOver, we show that it is possible to take advantage of two complementary input channels (touch screen and accelerometers) to add a state to the finger-drag, thus enriching the interaction. Finally, we focus on menus for mobile devices and propose a new form of gestural menus. We describe their implementation with the GeLATI software library, which allows their integration into a pre-existing GUI toolkit.
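One possible reading of the TouchOver idea, sketched under assumed thresholds and state names (the thesis's actual design may differ): tilt from the accelerometer adds a state to the ordinary finger-drag.

```python
# Minimal sketch of a TouchOver-style two-channel state machine: the touch
# screen supplies finger contact while accelerometer tilt toggles between a
# "hover" (tracking) state and a "drag" (engaged) state. The threshold and
# state names are assumptions, not taken from the thesis.
TILT_THRESHOLD = 0.35  # radians, hypothetical

def touchover_state(finger_down: bool, tilt: float) -> str:
    if not finger_down:
        return "idle"
    return "drag" if abs(tilt) > TILT_THRESHOLD else "hover"

for event in [(True, 0.1), (True, 0.5), (False, 0.5)]:
    print(touchover_state(*event))  # hover, drag, idle
```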
25

Architecture and Applications of a Geovisual Analytics Framework

Ho, Quan January 2013 (has links)
The large and ever-increasing amounts of multi-dimensional, multivariate, multi-source, spatio-temporal data represent a major challenge for the future. The need to analyse and make decisions based on these data streams, often in time-critical situations, demands integrated, automatic and sophisticated interactive tools that aid the user to manage, process, visualize and interact with large data spaces. The rise of 'Web 2.0', which is undisputedly linked with developments such as blogs, wikis and social networking, and the explosion of Internet usage in the last decade represent another challenge: adapting these tools to the Internet to reach a broader user community. In this context, the research presented in this thesis introduces an effective web-enabled geovisual analytics framework implemented, applied and verified in Adobe Flash ActionScript and HTML5/JavaScript. It has been developed on the principles behind Visual Analytics and designed to significantly reduce the time and effort needed to develop customized web-enabled applications for geovisual analytics tasks, and to bring the benefits of visual analytics to the public. The framework is built on a component architecture and includes a wide range of visualization techniques enhanced with various interaction techniques and interactive features to support better data exploration and analysis. The importance of multiple coordinated and linked views is emphasized, and a number of effective techniques for linking views are introduced. Research has so far focused more on tools that explore and present data, while tools that support capturing and sharing gained insights have not received the same attention; the latter is therefore one of the focuses of this thesis. A snapshot technique is introduced that supports capturing discoveries made during the exploratory data analysis process and can be used for sharing gained knowledge. The thesis also presents a number of applications developed to verify the usability and overall performance of the framework for the visualization, exploration and analysis of data in different domains. Four application scenarios are presented, introducing (1) the synergies among information visualization, geovisualization and volume data visualization methods for the exploration and correlation of spatio-temporal ocean data, (2) effective techniques for the visualization, exploration and analysis of self-organizing network data, (3) effective flow visualization techniques applied to the analysis of time-varying spatial interaction data such as migration, commuting and trade flow data, and (4) effective techniques for the visualization, exploration and analysis of flood data.
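The snapshot technique can be pictured as a memento over the linked views' state; the sketch below is a guess at such a mechanism, with hypothetical field names, not the framework's API.

```python
# Illustrative snapshot mechanism (a memento pattern) of the kind the
# capture-and-share feature describes: serialize the state of all linked
# views plus an analyst comment so a discovery can be restored or shared.
import json, time

def take_snapshot(views: dict, annotation: str) -> str:
    """Capture the state of every linked view as a shareable JSON string."""
    return json.dumps({
        "timestamp": time.time(),
        "annotation": annotation,
        "views": views,
    })

snap = take_snapshot(
    {"map": {"center": [58.4, 15.6], "zoom": 7},
     "scatter": {"x": "inflow", "y": "outflow", "brushed": [12, 48]}},
    "Migration outliers cluster in coastal municipalities",
)
restored = json.loads(snap)  # re-applying this state restores the discovery
print(restored["annotation"])
```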
26

Direct Pen Input and Hand Occlusion

Vogel, Daniel 01 September 2010 (has links)
We investigate, model, and design interaction techniques for hand occlusion with direct pen input. Our focus on occlusion follows from a qualitative and quantitative study of direct pen usability with a conventional graphical user interface (GUI). This study reveals overarching problems relating to poor precision, ergonomics, cognitive differences, limited input, and occlusion. To investigate occlusion more closely, we conduct three formal experiments to examine its area and shape, its effect on performance, and compensatory postures. We find that the shape of the occluded area varies across participants, with some common characteristics. Our results provide evidence that occlusion affects target selection performance, especially for continuous tasks or when the goal is initially hidden. We observe how users contort their wrist posture during a simultaneous monitoring task, and show that this can increase task time. Based on these investigations, we develop a five-parameter geometric model to represent the shape of the occluded area and extend this to a user-configurable, real-time version. To evaluate our model, we introduce a novel analytic testing methodology using optimization for geometric fitting and precision-recall statistics for comparison, as well as conducting a user study. To address problems with occlusion, we introduce the notion of occlusion-aware interfaces: techniques which use our configurable model to track currently occluded regions and then counteract potential problems and/or utilize the occluded area. As a case study, we present the Occlusion-Aware Viewer: an interaction technique which displays otherwise missed previews and status messages in a non-occluded area. This thesis also presents a number of methodology contributions for quantitative and qualitative study design, multi-faceted study logging using synchronized video, qualitative analysis, image-based analysis, task visualization, optimization-based analytical testing, and user interface image processing.
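The analytic testing methodology pairs geometric fitting with precision-recall scoring; the sketch below illustrates the scoring step only, substituting a single offset circle for the actual five-parameter shape.

```python
# Hedged sketch of scoring a geometric occlusion model with precision-recall,
# as the abstract describes. The model here is a single circle rather than
# the thesis's five-parameter shape, purely to show the fit-and-score idea.
import numpy as np

def circle_mask(cx, cy, r, size=200):
    """Boolean raster of a circle; stands in for an occlusion-shape mask."""
    yy, xx = np.mgrid[0:size, 0:size]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

predicted = circle_mask(120, 130, 60)  # region the model says is occluded
observed = circle_mask(110, 140, 55)   # region actually occluded by the hand

tp = np.logical_and(predicted, observed).sum()
precision = tp / predicted.sum()  # how much of the prediction is correct
recall = tp / observed.sum()      # how much real occlusion is covered
print(f"precision={precision:.2f} recall={recall:.2f}")
```

An optimizer would adjust the model parameters to maximize such a score against masks extracted from video of each participant's hand.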
27

A Symmetric Interaction Model for Bimanual Input

Latulipe, Celine January 2006 (has links)
People use both hands cooperatively in many everyday activities. The modern computer interface fails to take advantage of this basic human ability, with the exception of the keyboard. However, the keyboard is limited in that it does not afford continuous spatial input. The computer mouse is perfectly suited to the point-and-click tasks that are the major method of manipulation within graphical user interfaces, but standard computers have a single mouse, and a single mouse does not afford spatial coordination between the two hands. Although the advent of the Universal Serial Bus has made it easy to plug in many peripheral devices, including a second mouse, modern operating systems assume a single spatial input stream: if a second mouse is plugged into a Macintosh, Windows or UNIX computer, the two mice control the same cursor.

Previous work in two-handed or bimanual interaction techniques has often followed the asymmetric interaction guidelines set out by Yves Guiard's Kinematic Chain Model. In asymmetric interaction, the hands are assigned different tasks based on hand dominance. I show that there is an interesting class of desktop user interface tasks which can be classified as symmetric. A symmetric task is one in which the two hands contribute equally to the completion of a unified task. I show that dual-mouse symmetric interaction techniques outperform both traditional single-mouse techniques and dual-mouse asymmetric techniques for these symmetric tasks, and that users prefer the symmetric interaction techniques for these naturally symmetric tasks.
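A toy example of what makes a technique symmetric in this sense, with illustrative coordinates: both cursors contribute equally and continuously to one unified task.

```python
# Minimal sketch of a symmetric dual-mouse technique: each cursor drags one
# corner of a shared rectangle, so both hands contribute equally to a single
# unified task. (An asymmetric technique would instead assign the hands
# different roles based on dominance.) Coordinates are illustrative.
def symmetric_rect(left_cursor, right_cursor):
    """Rectangle (x, y, width, height) spanned by two independent cursors."""
    (lx, ly), (rx, ry) = left_cursor, right_cursor
    return (min(lx, rx), min(ly, ry), abs(rx - lx), abs(ry - ly))

# Both hands move at once; the rectangle updates from both input streams.
print(symmetric_rect((100, 120), (340, 260)))  # (100, 120, 240, 140)
```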
28

Volume Sculpting with 2D and 3D Interaction Techniques

Huff, Rafael January 2006 (has links)
Visualization of volumetric datasets is common in many fields and has been an active area of research for the past two decades. In spite of developments in volume visualization techniques, interacting with large datasets still demands research effort due to perceptual and performance issues. Graphics-hardware support for texture-based visualization allows efficient implementation of rendering techniques that can be combined with interactive sculpting tools to enable interactive inspection of 3D datasets. Many studies regarding the performance optimization of sculpting tools have been reported, but very few are concerned with the interaction techniques they employ. The purpose of this work is the development of interactive, intuitive, and easy-to-use sculpting tools. Initially, a review of the main techniques for direct volume visualization and sculpting is presented, and the solution that best guarantees the required interactivity is highlighted. Afterwards, in order to identify the most user-friendly interaction techniques for volume sculpting, several interaction techniques, metaphors and taxonomies are reviewed. Based on this, the work presents the development of three generic sculpting tools implemented using two interaction metaphors often used in 3D applications: the virtual pointer and the virtual hand. Interactive rates for these sculpting tools are obtained by running special fragment programs on the graphics hardware, which specify regions within the volume to be discarded from rendering based on geometric predicates. After development, the performance, precision and user preference of the sculpting tools were evaluated to compare the interaction metaphors. The tools were then evaluated by comparing a 3D mouse against a conventional wheel mouse for volume and tool manipulation; two-handed input was also tested with both types of mouse. The results of these evaluation experiments are presented and discussed.
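The fragment-program predicate can be mimicked on the CPU to show the idea; the sketch below culls voxels inside a spherical eraser (sizes and values are illustrative, and the real implementation runs per-fragment on the GPU).

```python
# CPU sketch of the discard predicate the thesis runs as a GPU fragment
# program: voxels satisfying a geometric predicate (here, lying inside a
# spherical eraser positioned by the pointer/hand tool) are culled from
# rendering. Volume contents and tool placement are illustrative.
import numpy as np

volume = np.random.rand(64, 64, 64)    # stand-in scalar volume
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]

def sculpt_sphere(center, radius):
    """Boolean mask of voxels to discard: True inside the sphere tool."""
    cz, cy, cx = center
    return (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= radius ** 2

# NaN marks culled voxels, which a renderer would simply skip.
visible = np.where(sculpt_sphere((32, 32, 32), 12), np.nan, volume)
print(np.isnan(visible).sum(), "voxels culled")
```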
