  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Gestural interaction techniques for handheld devices combining accelerometers and multipoint touch screens

Scoditti, Adriano 28 September 2011 (has links) (PDF)
In this thesis, we address the question of gestural interaction on mobile devices. These devices, now commonplace, differ from conventional computers primarily in the input devices the user interacts with (small but touch-sensitive screens, and various sensors such as accelerometers) as well as in the contexts in which they are used. The work presented here is an exploration of the vast space of interaction techniques on these mobile devices. First, we attempt to structure this space by focusing on accelerometer-based techniques, for which we propose a taxonomy. Its descriptive and discriminative power is validated by the classification of thirty-seven interaction techniques from the literature. Second, we focus on the design of gestural interaction techniques for these mobile devices. With TouchOver, we show that it is possible to take advantage of two complementary input channels (touch screen and accelerometer) to add a state to the finger-drag, thus enriching the interaction. Finally, we focus on mobile device menus and propose a new form of gestural menus. We discuss their implementation with the GeLATI software library, which allows their integration into a pre-existing GUI toolkit.
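TouchOver's core idea, combining the touch channel with the accelerometer channel to add an extra state to the finger-drag, can be sketched as a small state classifier. This is a hypothetical illustration of the principle only: the state names and the tilt threshold are invented for the example and are not taken from the thesis.

```python
TILT_THRESHOLD = 0.35  # radians; a hypothetical cutoff, not the thesis's value

def drag_state(touching: bool, tilt: float) -> str:
    """Classify combined touch + tilt input into one of three drag states.

    Without tilt sensing, touch input alone distinguishes only two states
    (out of range vs. dragging); the accelerometer adds a third.
    """
    if not touching:
        return "out_of_range"
    return "tilt_drag" if abs(tilt) > TILT_THRESHOLD else "drag"

drag_state(False, 0.0)   # "out_of_range"
drag_state(True, 0.1)    # "drag"
drag_state(True, 0.5)    # "tilt_drag"
```

The design point is that the second channel is read continuously while the finger stays down, so the extra state can be entered and left in the middle of a drag.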
12

Evaluating Swiftpoint as a Mobile Device for Direct Manipulation Input

Amer, Taher January 2006 (has links)
Swiftpoint is a promising new computer pointing device designed primarily for mobile computer users in constrained spaces. Swiftpoint has many advantages over current pointing devices: it is small, ergonomic, has a digital ink mode, and can be used over a flat keyboard. This thesis aids the development of Swiftpoint by formally evaluating it against two of the most common pointing devices used with today's mobile computers: the touchpad and the mouse. Two laws commonly used in pointing device evaluations, Fitts' Law and the Steering Law, were used to evaluate Swiftpoint. Results showed that Swiftpoint was faster and more accurate than the touchpad. The performance of the mouse, however, was superior to both the touchpad and Swiftpoint. Experimental results were reflected in participants' choice of the mouse as their preferred pointing device, although some participants indicated that their choice was based on their familiarity with the mouse. None of the participants chose the touchpad as their preferred device.
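Fitts' Law, one of the two models used in this evaluation, predicts pointing time from target distance and width. A minimal sketch of the standard computation (the Shannon formulation of the index of difficulty and the resulting throughput; the trial values are invustrative, not data from the thesis):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits per second for a single pointing trial."""
    return index_of_difficulty(distance, width) / movement_time

# Example trial: 300 px movement to a 20 px target, completed in 0.8 s
index_of_difficulty(300, 20)   # log2(16) = 4.0 bits
throughput(300, 20, 0.8)       # 4.0 / 0.8 = 5.0 bits/s
```

Device comparisons like the one in this thesis typically average throughput over many trials spanning a range of difficulty indices.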
13

LiquidText: supporting active reading through flexible document representations

Tashman, Craig Stuart 03 April 2012 (has links)
Knowledge workers are frequently called upon to perform deep, critical reading involving a heightened level of interaction with the reading media and other tools. This process, known as active reading, entails highlighting, commenting upon, and flipping through a text, among other actions. While paper is traditionally seen as the ideal medium for active reading, computers have recently become comparable to paper by replicating its affordances. But even paper is not a panacea: it offers an inflexible document representation that supports some things well, such as embellishment, but supports others, like comparison and large-scale annotation, very poorly. In response, I developed a prototype system, called LiquidText, to embody a flexible, high degree-of-freedom visual representation that seeks to alleviate some of the problems of paper and paper-like representations. To provide efficient control of this representation, LiquidText runs on a multi-finger touch- and gesture-based platform. To guide the development of this system, I conducted a formative study of current active reading practice. I investigated knowledge workers' active reading habits, perceptions, and the problems they face with current reading media, and inquired into what they would like in a future active reading environment. I used these results, in conjunction with multiple design iterations and formative system evaluations, to refine LiquidText for use in a summative study. The summative study assessed, through a controlled laboratory evaluation, LiquidText's impact on 1) the subjective experience of active reading, 2) the process of active reading, and 3) the outputs resulting from active reading. Generally, the study found a strong participant preference for LiquidText, and that participants focused on creating a summary of the original document as part of the reading process.
On average, reading outputs were neither significantly better nor worse with LiquidText, but some conditions were observed that may help identify the subset of people for whom LiquidText will result in an improvement.
14

Improving command selection in smart environments by exploiting spatial constancy

2015 November 1900 (has links)
With a steadily increasing number of digital devices, our environments are becoming smarter: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that supports these kinds of interactions. In my dissertation, I introduce Room-based Interaction to address this problem. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection.
An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance for the average user.
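The pointing-based selection described above can be sketched as picking the proxy object with the smallest angular offset from the user's pointing ray. This is a minimal geometric illustration only; the object names, the tolerance angle, and the coordinates are invented for the example, not taken from the dissertation.

```python
import math

def angle_to(origin, direction, target):
    """Angle in radians between a pointing ray and the direction to a target."""
    v = [t - o for t, o in zip(target, origin)]
    dot = sum(d * w for d, w in zip(direction, v))
    norm = math.sqrt(sum(d * d for d in direction)) * math.sqrt(sum(w * w for w in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def select_proxy(origin, direction, proxies, max_angle=math.radians(15)):
    """Return the proxy object nearest the pointing ray, or None if none is close enough."""
    name, pos = min(proxies.items(),
                    key=lambda kv: angle_to(origin, direction, kv[1]))
    return name if angle_to(origin, direction, pos) <= max_angle else None

# Hypothetical room: proxy objects at fixed 3D positions (metres)
proxies = {"lamp": (2.0, 1.0, 0.0), "tv": (0.0, 1.0, 3.0)}
select_proxy((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), proxies)  # "lamp"
select_proxy((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), proxies)  # "tv"
```

Because selection depends only on the arm's direction relative to remembered object positions, no screen feedback is required, which is the property the dissertation exploits.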
15

A technique for interactive shape deformation on non-structured objects / Uma técnica para deformação interativa de objetos não estruturados

Blanco, Fausto Richetti January 2007 (has links)
Este trabalho apresenta uma técnica para deformação interativa de objetos 3D não estruturados que combina o uso de sketches em 2D e manipulação interativa de curvas. Através de sketches no plano de imagem, o usuário cria curvas paramétricas a serem usadas como manipulares para modificar a malha do objeto. Um conjunto de linhas desenhadas sobre a projeção do modelo pode ser combinado para criar um esqueleto composto de curvas paramétricas, as quais podem ser interativamente manipuladas, deformando assim a superfície associada a elas. Deformações livres são feitas movendo-se interativamente os pontos de controle das curvas. Alguns outros efeitos interessantes, como torção e escalamento, são obtidos operando-se diretamente sobre o campo de sistemas de coordenadas criado ao longo da curva. Um algoritmo para evitar inter-penetrações na malha durante uma sessão de modelagem com a técnica proposta também é apresentado. Esse algoritmo é executado a taxas interativas assim como toda a técnica apresentada neste trabalho. A técnica proposta lida naturalmente com translações e grandes rotações, assim como superfícies não orientáveis, não variedades e malhas compostas de múltiplos componentes. Em todos os casos, a deformação preserva os detalhes locais consistentemente. O uso de curvas esqueleto permite implementar a técnica utilizando uma interface bem intuitiva, e provê ao usuário um controle preciso sobre a deformação. Restrições sobre o esqueleto e deformações sem inter-penetrações são facilmente conseguidos. É demonstrada grande qualidade em torções e dobras nas malhas e os resultados mostram que a técnica apresentada é consideravelmente mais rápida que as abordagens anteriores, obtendo resultados similares. Dado seu relativo baixo custo computacional, esta abordagem pode lidar com malhas compostas por centenas de milhares de vértices a taxas interativas. 
/ This work presents a technique for interactive shape deformation of unstructured 3D models, based on 2D sketches and interactive curve manipulation in 3D. A set of lines sketched on the image plane over the projection of the model can be combined to create a skeleton composed of parametric curves, which can be interactively manipulated, thus deforming the associated surfaces. Free-form deformations are performed by interactively moving the curves' control points. Other interesting effects, such as twisting and scaling, are obtained by operating directly on a frame field defined along the curve. An algorithm for avoiding local mesh self-intersections during model deformation is also presented. This algorithm, like the whole technique presented in this work, runs at interactive rates. The technique naturally handles both translations and large rotations, as well as non-orientable and non-manifold surfaces, and meshes comprised of multiple components. In all cases, the deformation preserves local features. The use of skeleton curves allows the technique to be implemented with a very intuitive interface and gives the user fine control over the deformation. Skeleton constraints and local self-intersection avoidance are easily achieved. High-quality results on twisting and bending meshes are also demonstrated, and the results show that the presented technique is considerably faster than previous approaches at achieving similar results. Given its relatively low computational cost, this approach can handle meshes composed of hundreds of thousands of vertices at interactive rates.
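The skeleton-driven deformation described above can be illustrated, in a much-simplified 2D form, by binding each mesh vertex to its nearest skeleton sample and re-applying the stored offset after the skeleton moves. This is a sketch of the general idea only: the thesis's technique additionally preserves local detail under large rotations and avoids self-intersections, which this toy version does not attempt.

```python
import math

def bind_to_skeleton(vertices, skeleton):
    """For each vertex, store the index of, and offset from, its nearest skeleton point."""
    bindings = []
    for v in vertices:
        i = min(range(len(skeleton)), key=lambda j: math.dist(v, skeleton[j]))
        bindings.append((i, tuple(a - b for a, b in zip(v, skeleton[i]))))
    return bindings

def deform(bindings, deformed_skeleton):
    """Re-apply each vertex's stored offset relative to the moved skeleton point."""
    return [tuple(s + o for s, o in zip(deformed_skeleton[i], off))
            for i, off in bindings]

skeleton = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # a straight 2D "curve"
verts = [(0.1, 0.2), (1.9, -0.1)]                 # two surface points near it
b = bind_to_skeleton(verts, skeleton)
moved = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]      # bend the skeleton upward
deform(b, moved)  # [(0.1, 0.2), (1.9, 0.9)] up to float rounding
```

Binding once and re-applying offsets per frame is what keeps this class of technique cheap enough to run at interactive rates on large meshes.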
17

Towards replacing the remote control with commodity smart-phones through evaluation of interaction techniques enabling television service navigation

Forsling Parborg, Emma January 2017 (has links)
The aim of this project was to develop an application compatible with set-top boxes and other browser-based applications, and to research which interaction techniques could be considered a viable substitute for the traditional remote control without requiring the viewer's visual attention. User tests were also performed to broadly evaluate the different interaction techniques used in the application, and to assess how the UI itself, including non-visual feedback on both the sender and receiver sides, is perceived.
18

Reaching out to grasp in Virtual Reality : A qualitative usability evaluation of interaction techniques for selection and manipulation in a VR game / Sträck ut och ta tag i virtuell verklighet : En kvalitativ användarbarhetsstudie av interaktionstekniker för val och manipuliering i ett VR spel

Eriksson, Mikael January 2016 (has links)
A new wave of VR head-mounted displays is being developed and released to the commercial market, including the HTC Vive, Oculus Rift, and Playstation VR. Bundled with these head-mounted displays is a new generation of hand motion controllers that allow users to reach out and grasp in the virtual environment. Earlier research has explored a range of possible interaction techniques for immersive VR, mainly focusing on the quantitative and objective performance of each technique. Yet even with this research, picking the right technique for a given scenario remains a challenging task. This study tries to complement earlier research by instead investigating the qualitative and more subjective aspects of usability, while making use of the upcoming commercial VR hand controllers. The purpose was to provide guidelines to help future immersive VR interaction designers and researchers. Two interaction techniques (classic Go-Go, and ray casting with a reel) were chosen to represent the two most commonly used interaction metaphors for selection and manipulation, i.e. grabbing and pointing. Eleven users were recruited to try the two interaction techniques inside a shopping scene originally part of a commercial VR game. Each user had to complete five tasks for each technique while “thinking aloud”, followed by an interview after the test. The sessions were recorded and analysed based on five usability factors. The results indicated a strong preference for the Go-Go interaction technique, with arguments based on how natural its interaction was. These results confirmed several conclusions drawn in earlier research about interaction in immersive VR, including the strength of natural interaction in scenarios that can reach a high degree of naturalism, as well as the importance of showing the user when the interaction technique diverges from realistic behaviour.
Last but not least the results also pointed to the importance of further study on immersive VR interaction techniques over long time use and when combined with user interfaces. / En ny våg av VR-hjälmar håller på att utvecklas för den kommersiella marknaden med exempel såsom HTC Vive, Oculus Rift och Playstation VR. Dessa VR-hjälmar kommer tillsammans med en ny generation av rörelsekänsliga handkontroller som tillåter användarna i den virtuella miljön att nå ut och greppa tag. Tidigare forskning har utforskat en mängd möjliga interaktionstekniker för immersiv VR interaktion, med fokus på de kvantitativa och objektiva faktorerna för varje teknik. Trots denna forskning så är valet av interaktionsteknik för ett givet VR scenario fortfarande en svår uppgift. Denna studie försöker komplementera tidigare forskning genom att granska de mer subjektiva och kvalitativa aspekterna av användbarhet, samtidigt som den nya generationen av handkontroller för VR används. Syftet med studien var att framställa rekommendationer för att underlätta för framtida interaktionsdesigners och forskare inom VR. Två interaktionstekniker (klassisk Go-Go samt strålkastning med fiskerulle) valdes ut för att representera de två mest använda interaktionsmetaforerna för val och manipulering, det vill säga att greppa och att peka. Elva användare rekryterades för att pröva de två interaktionsteknikerna, inom ramen för ett shopping scenario som ursprungligen ingick i ett kommersiellt VR spel. Varje användare ombads att utföra fem uppgifter med varje teknik samtidigt som de “tänkte högt”, vilket följdes av en avslutande intervju. Sessionerna spelades in och analyserades utifrån fem användbarhetsfaktorer. Resultaten visade att användarna föredrog Go-Go, på grund av att dess interaktion ansågs vara mer naturlig. 
Resultaten bekräftade även ett flertal slutsatser från tidigare forskning kring interaktionstekniker för VR, så som styrkan i naturlig interaktion i situationer som har kapacitet att nå en hög grad av realism och vikten av att visa användarna när interaktionstekniken bryter mot ett realistiskt beteende. Sist men inte minst visade resultaten även på vikten av framtida studier, dels gällande användning av interaktionstekniker över en längre tid och dels gällande hur dessa interaktionstekniker ska kombineras med användargränssnitt.
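The "classic Go-Go" technique compared in this study uses a nonlinear mapping between the real and virtual hand: within a threshold distance from the body the virtual hand tracks the real hand one-to-one, and beyond it the offset grows quadratically so the user can reach distant objects. A minimal sketch of that mapping (the threshold and gain values are illustrative defaults, not the parameters used in this study):

```python
def gogo_virtual_distance(real_dist: float, threshold: float = 0.4, k: float = 6.0) -> float:
    """Go-Go arm extension: one-to-one within `threshold` metres, quadratic beyond it."""
    if real_dist < threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

gogo_virtual_distance(0.3)  # 0.3  (inside the threshold: unchanged)
gogo_virtual_distance(0.6)  # 0.6 + 6 * 0.2**2 = 0.84 (virtual hand reaches further)
```

Keeping the mapping identity near the body is what makes Go-Go feel natural for close-range grabbing, the property participants in this study responded to, while still covering the whole scene.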
19

From data exploration to presentation : designing new systems and interaction techniques to enhance the sense-making process / De l'exploration des données à la présentation : concevoir de nouveaux systèmes et techniques d'interaction pour améliorer la création de sens à partir de données

Romat, Hugo 03 October 2019 (has links)
Au cours de la dernière décennie, la quantité de données n'a cessé d'augmenter. Ces données peuvent provenir de sources variées, telles que des smartphones, des enregistreurs audio, des caméras, des capteurs, des simulations, et peuvent avoir différentes structures. Bien que les ordinateurs puissent nous aider à traiter ces données, c'est le jugement et l'expertise humaine qui les transforment réellement en connaissances. Cependant, pour donner un sens à ces données de plus en plus diversifiées, des techniques de visualisation et d'interaction sont nécessaires. Ce travail de thèse contribue de telles techniques pour faciliter l'exploration et la présentation des données, lors d'activités visant à faire sens des données. Dans la première partie de cette thèse, nous nous concentrons sur les systèmes interactifs et les techniques d'interaction pour aider les utilisateurs à faire sens des données. Nous étudions comment les utilisateurs travaillent avec des contenus divers afin de leur permettre d'externaliser leurs pensées par le biais d'annotations digitales. Nous présentons notre approche avec deux systèmes. Le premier, ActiveInk, permet l'utilisation naturelle du stylet pour la lecture active, lors d'un processus d'exploration de données. Dans le cadre d'une étude qualitative menée auprès de huit participants, nous contribuons des observations sur les comportements de la lecture active au cours de l'exploration des données, et, des principes aidant les utilisateurs à faire sens des données.Le second système, SpaceInk, est un espace de conception de techniques en utilisant le stylet et les gestes, qui permet de créer de l'espace pour les annotations, pendant la lecture active, en ajustant dynamiquement le contenu du document. Dans la deuxième partie de cette thèse, nous avons étudié les techniques permettant de représenter visuellement les éléments de réponses aux questions quand les utilisateurs essaient de faire sens des données. 
Nous nous concentrons sur l'une des structures de données les plus élaborées : les réseaux multi-variés, que nous visualisons à l'aide de diagrammes noeuds-liens. Nous étudions comment permettre un processus de conception itératif flexible lors de la création de diagrammes nœuds-liens pour les réseaux multi-variés. Nous présentons d'abord un système, Graphies, qui permet la création de visualisations expressives de diagrammes noeuds-liens en fournissant aux concepteurs un environnement de travail flexible qui rationalise le processus créatif et offre un support efficace pour les itérations rapides de conception. Allant au-delà de l'utilisation de variables visuelles statiques dans les diagrammes nœuds-liens, nous avons étudié le potentiel des variables liées au mouvement pour encoder les attributs des données. En conclusion, nous montrons dans cette thèse que le processus visant à faire sens des données peut être amélioré à la fois dans le processus d'exploration et de présentation, en utilisant l'annotation comme nouveau moyen de transition entre exploration et externalisation, et en suivant un processus itératif et flexible pour créer des représentations expressives de données. Les systèmes qui en résultent établissent un cadre de recherche où la présentation et l'exploration sont au cœur des systèmes de données visuelles. / During the last decade, the amount of data has been constantly increasing. These data can come from several sources such as smartphones, audio recorders, cameras, sensors, simulations, and can have various structure. While computers can help us process these data, human judgment and domain expertise is what turns the data into actual knowledge. However, making sense of this increasing amount of diverse data requires visualization and interaction techniques. This thesis contributes such techniques to facilitate data exploration and presentation, during sense-making activities. 
In the first part of this thesis, we focus on interactive systems and interaction techniques to support sense-making activities. We investigate how users work with diverse content so that they can externalize thoughts through digital annotations. We present our approach with two systems. The first system, ActiveInk, enables the natural use of the pen for active reading during a data exploration process. Through a qualitative study with eight participants, we contribute observations of active reading behaviors during data exploration and design principles to support sense-making. The second system, SpaceInk, is a design space of pen-and-touch techniques that make space for in-context annotations during active reading by dynamically reflowing documents. In the second part, we focus on techniques to visually represent insights and answers to questions that arise during sense-making activities. We focus on one of the most elaborate data structures: multivariate networks, which we visualize using node-link diagrams. We investigate how to enable a flexible, iterative design process when authoring node-link diagrams for multivariate networks. We first present a system, Graphies, that enables the creation of expressive node-link diagram visualizations by providing designers with a flexible workflow that streamlines the creative process and effectively supports quick design iterations. Moving beyond the use of static visual variables in node-link diagrams, we investigate the use of motion to encode data attributes. To conclude, we show in this thesis that the sense-making process can be enhanced in both exploration and presentation: by using ink as a new medium to transition between exploration and externalization, and by following a flexible, iterative process to create expressive data representations. The resulting systems establish a research framework where presentation and exploration are a core part of visual data systems.
20

Techniques d'interaction exploitant la mémoire pour faciliter l'activation de commandes / Interaction techniques using memory to facilitate command activation

Fruchard, Bruno 11 December 2018 (has links)
Pour contrôler un système interactif, un utilisateur doit habituellement sélectionner des commandes en parcourant des listes et des menus hiérarchiques. Pour les sélectionner plus rapidement, il peut effectuer des raccourcis gestuels. Cependant, pour être efficace, il doit mémoriser ces raccourcis, une tâche difficile s’il doit activer un grand nombre de commandes. Nous étudions dans une première partie les avantages des gestes positionnels (pointage) et directionnels (Marking menus) pour la mémorisation de commandes, ainsi que l’utilisation du corps de l’utilisateur comme surface d’interaction et l’impact de deux types d’aides sémantiques (histoires, images) sur l’efficacité à mémoriser. Nous montrons que les gestes positionnels permettent d’apprendre plus rapidement et plus facilement, et que suggérer aux utilisateurs de créer des histoires liées aux commandes améliore considérablement leurs taux de rappel. / To control an interactive system, users usually have to select commands by browsing lists and hierarchical menus. To select commands faster, they can perform gestural shortcuts. However, to be effective, they must memorize these shortcuts, which is difficult when a large number of commands must be activated. In the first part, we study the advantages of positional (pointing) and directional (Marking menu) gestures for command memorization, as well as the use of the user's body as an interaction surface and the impact of two types of semantic aids (stories, images) on memorization effectiveness. We show that positional gestures make learning faster and easier, and that suggesting that users create stories related to the commands significantly improves their recall rate.
In the second part, we present bi-positional gestures that allow the activation of a large number of commands. We demonstrate their effectiveness using two interaction contexts: the touchpad of a laptop (MarkPad) and a smartwatch (SCM).
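The directional (Marking-menu) shortcuts studied here map the direction of a stroke to one of N equally spaced sectors, each bound to a command. A minimal sketch of that mapping (the command names and the eight-way layout are invented for illustration; the thesis's bi-positional gestures are a different, richer scheme):

```python
import math

# Hypothetical eight-way menu, sector 0 centred on a rightward stroke
COMMANDS = ["copy", "paste", "cut", "undo", "save", "open", "close", "find"]

def marking_menu_command(dx: float, dy: float, commands=COMMANDS):
    """Map a stroke direction (dx, dy) to one of N equally spaced menu sectors."""
    n = len(commands)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Shift by half a sector so each command's sector is centred on its direction
    sector = int((angle + math.pi / n) // (2 * math.pi / n)) % n
    return commands[sector]

marking_menu_command(1, 0)   # "copy"  (stroke to the right)
marking_menu_command(0, 1)   # "cut"   (stroke along +y, two sectors round)
```

Because selection depends only on direction, not on where the stroke starts, such gestures can eventually be performed without looking, which is exactly the memorization question this thesis investigates.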
