1.

Understanding and Exploiting Spatial Memory in the Design of Efficient Command Selection Interfaces

Scarr, Joseph Laurence. January 2014.
Humans have a strong natural ability to remember item locations. In graphical user interfaces, this ability is one of the primary mechanisms by which users become efficient. However, there are two ways in which modern applications often fail to exploit the potential of spatial memory. First, they overuse hierarchical structures such as cascading menus, which slows down interaction for expert users who already know item locations; and second, they move items around, most commonly in response to changing display geometry. The three goals of this thesis are therefore to (1) develop a better understanding of human spatial memory in the context of user interfaces; (2) design and validate efficient command-selection interfaces based on the strength of spatial memory; and (3) design and validate interface strategies that allow users to maintain spatial memory even when display geometry changes.

Addressing goal (1), a comprehensive literature review of spatial memory for user interfaces is presented. The review covers underlying psychological models of spatial memory, the observable properties of spatial memory, and existing applications of spatial memory to human-computer interaction. In addition to informing the research in this thesis, the review is intended to provide a useful summary of the state of spatial memory research for scientists in HCI, as well as providing a set of design guidelines on spatial memory for practitioners.

Addressing goal (2), this thesis presents the design and evaluation of two related user interface techniques, CommandMaps and StencilMaps. The CommandMap is a spatially stable interface with a flattened hierarchy, intended as a replacement for cascading menu systems. Theoretical performance predictions indicate that CommandMaps should be significantly faster than traditional user interfaces such as menus and the Microsoft Office Ribbon, and laboratory-based empirical studies of command selection confirm these predictions. These positive results motivated the design and implementation of two real-world CommandMap user interfaces based on Microsoft Word and Pinta (an open-source image editing application). Evaluation results confirmed that CommandMaps continue to demonstrate performance and subjective advantages in the context of actual tasks, including interleaved command selection, typing, and direct manipulation. Qualitative data gathered from interviews, questionnaires, and conversations provide substantial insight into users' reactions to CommandMaps, leading to a set of design recommendations regarding when and how they should be implemented in real applications.

One design limitation identified during CommandMap evaluations was that novice users could be initially overwhelmed by the number of controls displayed at once. To address this concern, an extension to the CommandMap, called a StencilMap, was designed and evaluated. By using a stencil overlay to de-emphasise more advanced controls, the StencilMap directs users' visual search to a subset of controls they are most likely to need. Then, when novice users progress to the full interface, they can utilise their existing knowledge of command locations. An initial study shows that stencils are more effective at guiding visual search than ephemeral adaptation, another subset emphasis technique; however, users' spatial learning decreases as the amount of guidance increases. A second study compared StencilMaps to a palette-based subset interface, which displays the most likely commands in a ready-to-hand tool panel. Results show that StencilMaps enable stronger learning of the full UI compared to the palette approach.

Addressing goal (3), this thesis presents an investigation of how interfaces can be adapted to changing interface constraints while still supporting the user's memory for item locations. A human factors study on spatially consistent transformations was conducted, with results showing that people's spatial memory is only minimally disrupted by geometric transformations (such as scaling, translation, or perspective distortion), as long as the set of items in a display is transformed as a whole. This idea is then applied to a file browser layout: by scaling the item grid when the parent window is resized, rather than reflowing items, memory for item locations can be maintained. A second study validates this idea, showing that a scaling interface outperforms both reflow and scrolling-based techniques for revisitation when windows are resized.

In summary, the contributions of this thesis are: (1) an in-depth literature review of spatial memory in psychology and HCI, which is intended to inform designers and future researchers as well as the material in this thesis; (2) the design, implementation and evaluation of a new interface, the CommandMap, which shows that spatial stability and hierarchy flattening enable a high ceiling of expert performance; (3) the design of a stencil overlay technique to help novice users find commands, and an evaluation highlighting the key trade-off between helping users and allowing them to learn; and (4) empirical evidence showing that most types of whole-interface transformations have a small effect on spatial memory, and that correspondingly, scaling interfaces outperform reflowing interfaces under changing window constraints.
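The scale-rather-than-reflow idea from goal (3) is concrete enough to sketch. The TypeScript below is a minimal, hypothetical illustration of the contrast the abstract describes, not code from the thesis; the names (Item, scaleLayout, reflowLayout) and the grid logic are assumptions made for the example.

```typescript
// Hypothetical sketch: two ways to relayout a file-browser grid on resize.
interface Item {
  x: number; // position in px, relative to the container's top-left
  y: number;
  w: number;
  h: number;
}

// Spatially stable: one uniform transform applied to the whole item set,
// so every item keeps its position *relative* to the window.
function scaleLayout(items: Item[], oldW: number, oldH: number,
                     newW: number, newH: number): Item[] {
  const sx = newW / oldW;
  const sy = newH / oldH;
  return items.map(it => ({
    x: it.x * sx, y: it.y * sy, w: it.w * sx, h: it.h * sy,
  }));
}

// Spatially unstable (for contrast): reflow into a grid sized to the new
// width, which moves items to different rows as the window narrows.
function reflowLayout(items: Item[], newW: number,
                      cellW: number, cellH: number): Item[] {
  const cols = Math.max(1, Math.floor(newW / cellW));
  return items.map((_, i) => ({
    x: (i % cols) * cellW,
    y: Math.floor(i / cols) * cellH,
    w: cellW,
    h: cellH,
  }));
}
```

Under scaleLayout a user's memory of "top-right corner" stays valid after any resize, because the display is transformed as a whole; under reflowLayout an item's row and column depend on the new width, which is exactly the disruption the revisitation study measured.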
2.

Increasing the expressive power of gesture-based interaction on mobile devices

Alvina, Jessalyn. 13 December 2017.
Current mobile interfaces let users directly manipulate the objects displayed on the screen with simple stroke gestures, e.g. tapping soft buttons or menus, or pinching to zoom. To access a larger command space, users are often forced to go through many steps, making the interaction cumbersome and inefficient. More complex gestures offer a powerful way to access information quickly and to perform commands more efficiently [5]; however, they are more difficult to learn and control. Gesture typing [78] is an interesting alternative for entering text: it lets users draw a gesture on a soft keyboard, from the first to the last letter of a word. In this thesis, I increase the expressive power of mobile interaction by leveraging the gesture's shape and dynamics, and the screen space, to produce rich output, to invoke commands, and to facilitate appropriation in different contexts of use. I design "Expressive Keyboard", which transforms gesture variations into rich output, and demonstrate several applications in the context of text-based communication. I also propose "CommandBoard", a gesture keyboard that lets users efficiently select commands from a large command space while supporting the transition from novice to expert. I demonstrate several applications of "CommandBoard", each offering users a choice based on their cognitive and motor skills, as well as on the size and organization of the current command set. Together, these techniques give users more expressive power by leveraging their motor control and their capacity to learn, to control, and to appropriate.
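The Expressive Keyboard idea, mapping continuous variation in how a gesture is drawn to rich output, can also be sketched. The TypeScript below is a hypothetical illustration under assumed choices, not the thesis's actual feature set or mapping: it extracts one simple feature (mean drawing speed) from a gesture trace and maps it to a font weight for the recognized word.

```typescript
// Hypothetical sketch: derive a continuous feature from a gesture trace
// and map it to text styling, so *how* a word is drawn shapes the output.
interface Point { x: number; y: number; t: number } // t: timestamp in ms

// Mean drawing speed in px/ms over the whole trace.
function meanSpeed(trace: Point[]): number {
  if (trace.length < 2) return 0;
  let dist = 0;
  for (let i = 1; i < trace.length; i++) {
    dist += Math.hypot(trace[i].x - trace[i - 1].x,
                       trace[i].y - trace[i - 1].y);
  }
  const dt = trace[trace.length - 1].t - trace[0].t;
  return dt > 0 ? dist / dt : 0;
}

// Arbitrary example mapping: faster, more energetic gestures render bolder.
function styleForGesture(trace: Point[]): { fontWeight: number } {
  const speed = Math.min(Math.max(meanSpeed(trace), 0), 2); // clamp to 0..2 px/ms
  return { fontWeight: 300 + Math.round((speed / 2) * 600) }; // 300..900
}
```

A real system would likely combine several gesture features and calibrate the mapping per user, but the pipeline shape is the same: trace in, continuous features out, styled text rendered alongside the recognized word.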
