1

Interação gestual sem dispositivos para displays públicos / Deviceless gestural interaction aimed at public displays

Motta, Thiago Stein, January 2013
With constant technological growth, it is common to come across public displays in places with high concentrations of people, such as airports and movie theaters. Although they present useful information, these displays could be put to better use if they were interactive. Drawing on research into interaction with large displays and the particular characteristics of displays placed in public spaces, this work seeks a form of interaction suited to such settings. It introduces a gestural interaction method that does not require the user to hold, or have attached to them, any device while interacting with a public display. To accomplish a task, the user simply stands in front of the display and manipulates the information on the screen with their hands. The supported gestures cover navigation, selection, and manipulation of objects, as well as panning and zooming the view. The proposed system is built so that it can serve different applications without a large deployment cost. To this end, it uses a client-server architecture that integrates the application containing the information of interest to the user with the component that interprets the user's gestures. A Microsoft Kinect reads the user's movements, and image post-processing detects whether the user's hands are open or closed. This information is then interpreted by a state machine that identifies what the user is trying to do in the client application. To assess how robust the system would be in a real public environment, criteria that could interfere with the interactive task were evaluated, such as differences in ambient lighting and the presence of other people in the interaction space. Three applications were developed as case studies, each evaluated differently, with one used for a formal user evaluation. The results show that the system, although it does not behave correctly in every situation, has practical potential provided its shortcomings are worked around, most of which stem from the Kinect's own inherent limitations. The proposed system works well enough for selecting and manipulating large objects and for applications based on pan-and-zoom interaction, such as map navigation, and it is not affected by differences in lighting or by the presence of other people in the environment.
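Reading the abstract above, the pipeline is: Kinect skeleton tracking, image post-processing for open/closed hand detection, then a state machine that turns hand states into client commands. The sketch below illustrates only that last stage; all state names, commands, and the message format are my own assumptions, not the thesis's implementation.

```python
# Hypothetical sketch of the state machine stage described in the abstract:
# per-frame hand readings (position + open/closed) become high-level commands.
from dataclasses import dataclass
from enum import Enum, auto

class HandState(Enum):
    OPEN = auto()
    CLOSED = auto()

class Mode(Enum):
    IDLE = auto()       # no interaction yet
    POINTING = auto()   # open hand moves a cursor
    DRAGGING = auto()   # closed hand grabs and moves an object / pans the view

@dataclass
class Frame:
    x: float            # normalized hand position from the sensor
    y: float
    hand: HandState

class GestureMachine:
    """Maps a stream of hand frames to commands for the client application."""
    def __init__(self):
        self.mode = Mode.IDLE
        self.last = None

    def step(self, frame: Frame) -> str:
        cmd = "none"
        if self.mode == Mode.IDLE:
            self.mode = Mode.POINTING
        elif self.mode == Mode.POINTING:
            if frame.hand == HandState.CLOSED:
                self.mode = Mode.DRAGGING      # closing the hand = grab/select
                cmd = "select"
        elif self.mode == Mode.DRAGGING:
            if frame.hand == HandState.OPEN:
                self.mode = Mode.POINTING      # opening the hand = release
                cmd = "release"
            elif self.last is not None:
                dx, dy = frame.x - self.last.x, frame.y - self.last.y
                cmd = f"drag {dx:+.3f} {dy:+.3f}"  # move object / pan the view
        self.last = frame
        return cmd

machine = GestureMachine()
for f in [Frame(0.5, 0.5, HandState.OPEN), Frame(0.5, 0.5, HandState.CLOSED),
          Frame(0.6, 0.5, HandState.CLOSED), Frame(0.6, 0.5, HandState.OPEN)]:
    print(machine.step(f))
```

In a client-server setup like the one described, `step()` would run on the gesture-server side and the returned command strings would be sent to whichever client application holds the on-screen content.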
2

Gestural interaction techniques for handheld devices combining accelerometers and multipoint touch screens

Scoditti, Adriano, 28 September 2011
In this thesis, we address the question of gestural interaction on mobile devices. These devices, now common, differ from conventional computers primarily in the input devices through which the user interacts with them (small but touch-sensitive screens, various sensors such as accelerometers) and in the contexts in which they are used. The work presented here is an exploration of the vast space of interaction techniques on these mobile devices. First, we structure this space by focusing on accelerometer-based techniques, for which we propose a taxonomy. Its descriptive and discriminant power is validated by the classification of thirty-seven interaction techniques from the literature. Second, we focus on the realization of gestural interaction techniques for these mobile devices. With TouchOver, we show that it is possible to take advantage of two complementary input channels (touch screen and accelerometers) to add a state to the finger drag, thus enriching the interaction. Finally, we focus on mobile device menus and propose a new form of gestural menus. We discuss their implementation with the GeLATI software library, which allows their integration into a pre-existing GUI toolkit.
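As a rough illustration of the TouchOver idea, the toy sketch below adds a third state to a finger drag by sampling device tilt during the drag. The threshold, state names, and pitch estimate are assumptions for illustration only; the thesis's actual technique and parameters are not reproduced here.

```python
# Illustrative sketch only, not the published TouchOver code. A touch drag
# normally offers two usable states (touching / dragging); reading the
# accelerometer during the drag adds a third, toggled here by tilting the
# device past an assumed threshold.
import math

TILT_THRESHOLD = 0.35  # radians; an assumed, tunable value

def drag_state(touching: bool, accel_xyz: tuple[float, float, float]) -> str:
    """Classify one input sample into a three-state drag."""
    if not touching:
        return "out-of-range"
    ax, ay, az = accel_xyz
    # device pitch estimated from gravity; crude but enough for a sketch
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return "drag-tilted" if abs(pitch) > TILT_THRESHOLD else "drag-flat"

# e.g. a flat drag vs. a drag with the device tilted toward the user
print(drag_state(True, (0.0, 0.0, 9.81)))   # drag-flat
print(drag_state(True, (4.5, 0.0, 8.7)))    # drag-tilted
```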
3

Understanding interaction mechanics in touchless target selection

Chattopadhyay, Debaleena, 28 July 2016
Indiana University-Purdue University Indianapolis (IUPUI)
We use gestures frequently in daily life: to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics. I argue for a novel approach to generating design guidelines for touchless systems: using fundamental interaction principles instead of reactively adapting to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering–targeting tasks. The results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions show how our sensorimotor abilities inform touchless interface affordances: we often make a single holistic oblique gesture rather than several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting the transfer of training between dominant and non-dominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems. Practical applications of this work include the use of touchless interaction techniques in domains such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration.
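A minimal sketch of command selection by directional stroke, in the spirit of the touchless circular menus described above. The four-command layout, dead zone, and angle convention are assumptions, not the dissertation's implementation.

```python
# Map a mid-air stroke vector to one of N commands arranged in a circle.
import math

def stroke_to_command(start, end, commands):
    """Pick the command whose sector contains the stroke direction."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < 0.05:           # ignore jitter below a dead zone
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / len(commands)
    # offset by half a sector so each command is centered on its direction
    return commands[int(((angle + sector / 2) % (2 * math.pi)) // sector)]

menu = ["open", "copy", "paste", "delete"]  # east, north, west, south
print(stroke_to_command((0.0, 0.0), (0.3, 0.0), menu))   # open
print(stroke_to_command((0.0, 0.0), (0.0, 0.3), menu))   # copy
```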
4

Towards a Richer Interaction Space with Gestural Interaction on Synthesizers

Sjöö, Anton, January 2022
The synthesizer is a highly complex artefact. It is widely used in the music industry and renowned for its varied sound qualities. Because it relies on rudimentary components such as buttons and knobs, however, it is not equally renowned for rich interactive possibilities. Through explorative research, this thesis identifies that interaction on synthesizers has stagnated and explores, through a lens of interaction design, how designers might use interactions such as gestures to control sound parameters for musical expression, and how these interactions can affect that expression. Through design activities such as prototyping, the thesis shows that gestures and hand movements feel natural and easy to grasp for musicians, and that they can change the way musicians play.
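To make the gesture-to-parameter idea concrete, here is a toy mapping (not from the thesis) from a continuous hand height to a synthesizer filter cutoff; the range and the exponential curve are assumptions, though exponential mappings are a common choice for frequency-like parameters.

```python
# Bind a normalized hand height (0.0 = lowered, 1.0 = raised) to filter cutoff.
def hand_height_to_cutoff(height: float,
                          lo_hz: float = 80.0, hi_hz: float = 8000.0) -> float:
    """Exponential mapping: equal hand movements give equal pitch-like steps."""
    height = min(max(height, 0.0), 1.0)
    return lo_hz * (hi_hz / lo_hz) ** height

for h in (0.0, 0.5, 1.0):
    print(f"height {h:.1f} -> cutoff {hand_height_to_cutoff(h):7.1f} Hz")
```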
5

Hacking the Gestures of Past for Future Interactions

Atılım, Şahin, January 2013
This study proposes a new "vocabulary" of gestural commands for mobile devices, based on established bodily practices and daily rituals. The research approach is grounded in a theoretical framework of phenomenology and entails collaborative improv workshops akin to bodystorming. The combination of these methods is termed "hacking the physical actions," and the significance of this approach is highlighted, especially as a foundation for similar research in this field. The resulting ideas for gestural commands are then synthesized, applied to fundamental tasks of handling mobile phones, and explained with a supplementary video.
6

Context-aware gestural interaction in the smart environments of the ubiquitous computing era

Caon, Maurizio, January 2014
Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have begun to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interaction. Many issues remain open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interaction between humans and smart environments. It proposes a novel framework for the high-level organization of context information, conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink the number of gestures in taxonomies, and improve usability. To validate this framework, a proof of concept was developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests were conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides good gesture recognition from very different viewpoints, while the usability tests yielded high scores. Further investigation of context information tackled the problem of user status, understood here as human activity; a technique based on an innovative application of electromyography is proposed, and tests show that it achieves good activity recognition accuracy. Context is also treated as system status. In ubiquitous computing, a system can adopt different paradigms: wearable, environmental, and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms; it expands the user's interaction possibilities and ensures better gesture recognition accuracy than the other paradigms.
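The "functional gestures" idea, as described, lets one gesture keep a single function while context decides its target. Below is a deliberately bare sketch of that resolution step, with a plain dictionary standing in for the much richer context framework the thesis proposes; all gesture and device names are invented for illustration.

```python
# Resolve a functional gesture to a concrete command using context:
# the same "increase" gesture acts on whatever the user is pointing at.
def resolve(gesture: str, context: dict) -> str:
    """Bind a functional gesture to a device-specific command."""
    target = context.get("pointed_at", "none")
    actions = {
        ("raise-hand", "lamp"): "lamp: increase brightness",
        ("raise-hand", "speaker"): "speaker: increase volume",
        ("wave", "lamp"): "lamp: toggle power",
    }
    return actions.get((gesture, target), f"no binding for {gesture} on {target}")

print(resolve("raise-hand", {"pointed_at": "lamp"}))
print(resolve("raise-hand", {"pointed_at": "speaker"}))
```

Because the gesture's function stays constant across targets, the taxonomy needs one "increase" gesture rather than one per device, which is the ambiguity-reduction argument made above.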
7

Interaction basée sur des gestes définis par l’utilisateur : Application à la réalité virtuelle / User-Defined Gestural Interaction for Virtual Reality

Jego, Jean-François, 12 December 2013
In this thesis, we propose and evaluate new gestural interfaces for 3D user interaction. This work is motivated by two application cases: the first is home therapy through virtual reality, aimed at people with limited sensorimotor abilities for whom generic interaction methods may not be suitable; the second is artistic digital performance, for which gestural freedom is part of the creative process. For these cases a standardized approach is not possible, so user-specific or dedicated interfaces are needed. We propose a user-defined gestural interaction that lets users teach the system the gestures they have created, in a dedicated phase prior to using the system; users then reuse their learned gesture vocabulary to interact in the virtual environment. This approach raises research questions about the memorization of gestures, the effects of fatigue, and the effects of visual feedback. To answer them, we first study the memorization of user-created gestures, examining the role of visual affordances of objects and of colocalization in gesture recall. We then evaluate the influence of different types of visual feedback on how users' gestures evolve over repetitions in a series of manipulation tasks, and we compare performance between realistic-amplitude gestures and lower-amplitude gestures for the same action. The approach is also designed to be affordable, using low-cost, minimally intrusive devices, and we explore ways to cope with the lower data quality of such devices. The results of the user studies are based on the analysis of more than six thousand gestures performed by some forty subjects.
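The learn-then-reuse loop described above can be illustrated with simple nearest-neighbor template matching over resampled 2D paths; the thesis's actual recognizer is not published here, so everything below is an assumed stand-in for the general approach of training on user-created gestures and recalling them later.

```python
# Learning phase: the user names and demonstrates each gesture once (or more);
# use phase: a new stroke is matched against the stored templates.
import math

def resample(path, n=16):
    """Resample a polyline to n evenly spaced points (crude arc-length walk)."""
    total = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    step, out = total / (n - 1), [path[0]]
    pts, i, acc = list(path), 0, 0.0
    while len(out) < n and i < len(pts) - 1:
        d = math.dist(pts[i], pts[i + 1])
        if acc + d >= step:
            t = (step - acc) / d
            q = (pts[i][0] + t * (pts[i + 1][0] - pts[i][0]),
                 pts[i][1] + t * (pts[i + 1][1] - pts[i][1]))
            out.append(q)
            pts[i] = q      # keep walking from the inserted point
            acc = 0.0
        else:
            acc += d
            i += 1
    while len(out) < n:     # pad against floating-point shortfall
        out.append(pts[-1])
    return out

def distance(a, b):
    """Mean point-to-point distance between two resampled paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

templates = {}  # learning phase
templates["swipe-right"] = resample([(0, 0), (1, 0)])
templates["swipe-up"] = resample([(0, 0), (0, 1)])

probe = resample([(0, 0.05), (0.5, 0.0), (1.0, 0.05)])  # later, during use
print(min(templates, key=lambda k: distance(templates[k], probe)))  # swipe-right
```

A real system would normalize for position and scale and keep several demonstrations per gesture, but even this bare version shows why recall and fatigue matter: the user must later reproduce a path close enough to what they originally taught.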
