  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
1

Interaction with a 3D modeling tool through a gestural interface: An Evaluation of Effectiveness and Quality

Gustavsson, David January 2014
Context. Gestural interfaces rely on technology that identifies and recognizes human body language and interprets it into commands. They are commonly used to ease everyday life and to increase usability in, for example, mobile phones. Objectives. This study evaluates a gestural interface as an interaction method for introducing new and novice users to 3D modeling tools, on the premise that such an interface might reduce modeling time without affecting the quality of the result. Methods. A gestural interface was designed and implemented based on previous research on gestural interfaces. Time and quality data were gathered through an experiment in which participants completed a set of modeling tasks in Autodesk Maya, using both the implemented gestural interface and the standard Maya interface. User experience was also recorded with a SUS questionnaire. Results. 17 participants took part in the experiment. Each participant produced time and quality results for every task with each interface, and the user experience of each interface was recorded through a SUS questionnaire. Conclusions. The gestural interface increased the users' modeling time, indicating that it was not preferable as an interaction medium. However, the time difference between the interfaces shrank with each completed task, suggesting that the gestural interface may improve the learnability of the software; the SUS questionnaire gave the same indication. The results showed no impact on quality.
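The abstract does not give the scoring procedure; as a point of reference, the sketch below shows the standard SUS scoring formula such a questionnaire is normally scored with (ten 1-5 Likert items, odd items positively worded, even items negatively worded, total scaled to 0-100). The example responses are hypothetical, not data from the study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical answers from one participant for one interface:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```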
2

Detecção de gestos manuais utilizando câmeras de profundidade / Detection of hand gestures using depth cameras

Prado Neto, Elias Ximenes do 28 May 2014
This work describes the design of a computer-vision-based system for recognizing distinct hand poses and for discriminating and tracking their parts. Among the software's priority requirements are effectiveness and efficiency at these tasks, so as to enable real-time control of computer systems through hand gestures. Portability to other devices and computing platforms, and the possibility of extending the initial set of poses, are also important conditions for its functionality. These characteristics tend to promote the popularization of the proposed interface, allowing its application to many purposes and situations and thereby contributing to the diffusion of this kind of technology and to the development of the fields of gestural interfaces and computer vision. Several methods were developed and investigated following a feature-extraction methodology, using image processing, video analysis and computer vision algorithms, together with machine-learning software for image classification. A depth camera was chosen as the capture device in order to obtain auxiliary information for the various associated processes, reducing the computational cost involved and enabling the manipulation of electronic systems in three-dimensional virtual spaces. Volunteers were recorded with this device performing the proposed hand poses, in order to validate the developed algorithms and to train the classifiers used; this recording was necessary because no available database was found containing images with adequate information for the investigated methods. Finally, a set of methods was developed that achieves these goals by being combined and adapted to different devices and tasks, thus covering all the requirements identified initially. Besides the implemented system, the publication of the resulting hand-pose image database is also a contribution to the fields associated with this work. Since the research carried out indicates that this database is the first available dataset compatible with several computer-vision methods for hand-gesture detection, it is expected to assist the development of software with similar purposes and to allow a proper comparison of their performance.
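The abstract names only the general pipeline (depth capture, feature extraction, machine-learning classification) without specifying the features or classifier. The sketch below is an assumed, illustrative version of such a pipeline (OpenCV 4.x, Hu moments plus a solidity measure, an RBF SVM); the depth range and pose labels are hypothetical and it is not the author's actual implementation.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hand_features(depth_frame, near_mm=400, far_mm=800):
    """Segment the closest blob in a depth frame (assumed to be the hand)
    and return simple shape features for pose classification."""
    # Keep only pixels inside the assumed hand distance range (millimetres).
    mask = ((depth_frame > near_mm) & (depth_frame < far_mm)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)         # largest blob = hand
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()   # rotation/scale-tolerant shape descriptor
    hull_area = cv2.contourArea(cv2.convexHull(hand))
    solidity = cv2.contourArea(hand) / hull_area if hull_area > 0 else 0.0
    return np.append(hu, solidity)

def train_pose_classifier(depth_frames, pose_labels):
    """Fit an SVM on labelled depth frames; labels such as 'open' or 'fist'
    are hypothetical pose names."""
    samples = [(hand_features(f), y) for f, y in zip(depth_frames, pose_labels)]
    samples = [(x, y) for x, y in samples if x is not None]
    X, y = zip(*samples)
    clf = SVC(kernel="rbf")
    clf.fit(np.array(X), np.array(y))
    return clf
```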
3

Chanter avec les mains : interfaces chironomiques pour les instruments de musique numériques / Singing with hands : chironomic interfaces for digital musical instruments

Perrotin, Olivier 23 September 2015
This thesis deals with the real-time control of singing-voice synthesis with a graphic tablet, in the context of the digital musical instrument Cantor Digitalis. The relevance of the graphic tablet for controlling vocal intonation is considered first, showing that the tablet provides more precise pitch control than the real voice under experimental conditions. To extend this accuracy to any playing situation, a dynamic pitch-warping method for intonation correction was developed. It makes it possible to play below the limen of pitch perception while preserving the musician's expressivity; objective and perceptual evaluations validate the method's efficiency. The use of new interfaces for musical expression raises the question of the modalities involved in playing the instrument. A third study reveals a preponderance of the visual modality over auditory perception for intonation control, due to the introduction of visual cues on the tablet surface; this is nevertheless compensated by the expressivity the interface allows. The writing and drawing skills acquired from early childhood enable quick acquisition of expert control of the instrument, and a set of gestures dedicated to the control of different effects found in vocal music is proposed. Finally, the instrument is practiced intensively within the Chorus Digitalis ensemble in order to test and promote the work. Artistic research was conducted on staging and on the choice of the Cantor Digitalis musical repertoire, and a visual feedback display dedicated to the audience was developed to extend the perception of the players' pitch and articulation and to help the audience understand how the instrument is handled.
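Perrotin's dynamic pitch-warping method adapts its correction over time and is not detailed in the abstract. The static sketch below only illustrates the underlying idea: pulling a continuously controlled pitch toward the nearest equal-tempered semitone while keeping part of the performer's deviation, so expressive bends are attenuated rather than erased. The strength parameter and the MIDI-unit mapping are assumptions.

```python
def warp_pitch(raw_midi, strength=0.7):
    """Pull a continuous pitch (in MIDI note units, e.g. 60.0 = C4) toward the
    nearest semitone, keeping a fraction (1 - strength) of the deviation."""
    nearest = round(raw_midi)
    deviation = raw_midi - nearest
    return nearest + (1.0 - strength) * deviation

# A tablet position mapped linearly to pitch, then corrected:
print(warp_pitch(60.35))  # ~60.105: about 10 cents sharp instead of 35
```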
4

Low Cost Open Source Modal Virtual Environment Interfaces Using Full Body Motion Tracking and Hand Gesture Recognition

Marangoni, Matthew J. 25 May 2013
No description available.
5

Évaluation des jeux Kinect à l’aide du suivi physiologique, du suivi oculaire et des réactions faciales du joueur / Evaluation of Kinect games using physiological monitoring, eye tracking and the player's facial reactions

Hua, Tran Nguyen Khoi 08 1900
Gesture-based video games allow interesting new interactions between the player and the game. To evaluate this new type of game, the usual subjective evaluation method would be insufficient (Mandryk, Inkpen and Calvert, 2006). Our research tries to combine objective and subjective evaluation to measure the quality of immersion of video games designed for the Kinect, an accessory for Microsoft's Xbox console that allows playing without a controller. Our corpus consists of 18 subjects (intensive and casual players) and 3 Kinect games (Body and Brain Connection, Child of Eden and Joy Ride). Our goal is to develop a method for assessing the quality of immersion of gesture-based video games. We relied, on the one hand, on a questionnaire built from the criteria for evaluating flow and player enjoyment in games (Sweetser and Wyeth, 2005) and from usability principles (Nielsen, 1994a, b; Bastien and Scapin, 1993; Johnson and Wiles, 2003), adapted by researchers of the DESS in game design at the Université de Montréal. On the other hand, we integrated measures of the player's physiological responses (galvanic skin response, blood volume pulse, respiration), ocular reactions (pupil diameter, fixation time) and facial expressions (joy, sadness, anger, fear, surprise and disgust). We drew on theoretical foundations from human-computer interaction and video-game design, studying in particular physiological and ocular reactions and facial expressions and relating them to the notions of presence and immersion in video games. The analysis showed correlations between the physiological and ocular reactions and the participants' subjective questionnaire responses: for example, a negative correlation between blood volume pulse (BVP) and the player's concentration, and positive correlations between respiration and pupil diameter and the player's sense of immersion. These results support the feasibility of our evaluation method. We then compared the three games on the components of immersion to find the most immersive one. Body and Brain Connection was the most appreciated by the participants, and a well-calibrated level of challenge and ease of control were the two main factors in its success. We also compared intensive and casual players on the components of immersion to see whether their views of the Kinect differed; the results showed no difference between the two types of players. / The data were analysed with software designed by François Courtemanche and Féthi Guerdelli. The game sessions took place at the Laboratoire de recherche en communication multimédia of the Université de Montréal.
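The study's raw data are not reproduced here; the sketch below only shows how a correlation such as the reported negative BVP/concentration link can be computed with a Pearson test. All numbers are invented purely for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant aggregates: mean blood volume pulse amplitude
# during play and the self-reported concentration score from the questionnaire.
bvp = np.array([0.82, 0.64, 0.91, 0.55, 0.73, 0.60, 0.88, 0.70])
concentration = np.array([3, 5, 2, 6, 4, 5, 2, 4])

r, p = pearsonr(bvp, concentration)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r echoes the reported BVP/concentration link
```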
6

The Challenge of Designing Gestures for Interaction

Eriksson, Anette, Svensson, Caroline January 2001
The main interfaces for interacting with computers today are the keyboard, the mouse and the remote control. To interact with the presentation software PowerPoint, the presenter has to focus either on the computer or on the buttons of the remote control; in doing so, the presenter often loses contact with the audience and his or her flow of speech is interrupted. This project has researched the possibility of using gestures to interact with PowerPoint, by means of an appliance that detects gestures. The aim was for the interaction to be realisable in software, of which we have produced an introductory design. We have focused on assisting presenters when they use PowerPoint and other applications while delivering presentations. To collect data and gain an understanding of presenters, presentations and gestures, we observed presenters in action, held workshops with future users and tested some gestures in real settings; these methods are inspired by approaches such as ethnographic fieldwork and participatory design. Throughout the project we used video recording to collect and preserve data, and UML diagrams to create a clear picture of what the future software should include. We separate gestures into two categories: natural and designed. Natural gestures occur spontaneously during speech and social interaction, while designed gestures are gestures one learns to use and express, often to perform a task. We found that designed gestures are best suited for gestural interaction with a computer. Because designed gestures stay close to the natural way of gesturing, we see them as easier to learn and remember and more comfortable to use; we think they have the potential to become second nature, which makes them well suited for interaction with computers. Our research also revealed the need for an on/off function to distinguish designed gestures from natural ones. By using a gestural interface during a presentation, presenters can keep their focus on the audience and the message they want to convey. When gestural interfaces become reality they will introduce a paradigm shift in the way people interact with computers and information.
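The thesis identifies the need for an on/off function separating designed command gestures from natural co-speech gestures, but does not publish an implementation. The sketch below is one possible gating scheme under that assumption; the gesture labels are hypothetical and do not come from the authors' design.

```python
class GestureGate:
    """Forward designed command gestures only while gesture control is
    explicitly switched on, so natural co-speech gestures are ignored."""

    def __init__(self):
        self.active = False

    def handle(self, gesture):
        # "activate"/"deactivate" are hypothetical labels produced by
        # whatever recognition appliance feeds this gate.
        if gesture == "activate":
            self.active = True
            return None
        if gesture == "deactivate":
            self.active = False
            return None
        if self.active and gesture in ("next_slide", "previous_slide"):
            return gesture          # forwarded to the presentation software
        return None                 # natural gesture, or control switched off

gate = GestureGate()
print(gate.handle("next_slide"))   # None: gate is off, gesture treated as natural
gate.handle("activate")
print(gate.handle("next_slide"))   # "next_slide": forwarded as a command
```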
7

L’espace du geste-son, vers une nouvelle pratique performative / The gesture-sound space, towards a new performative practice

Héon-Morissette, Barah 05 1900
This research-creation thesis is a reflection on the gesture-sound space. The author's artistic approach, based on six elements (body, sound, gesture, video image, physical space and technological space), was integrated into the design of a computer-vision motion capture system, the SICMAP (Système Interactif de Captation du Mouvement en Art Performatif, Interactive Motion Capture System for the Performative Arts). This approach proposes a new hybrid performative practice. The author first situates her artistic practice with respect to the three pillars of transdisciplinary research methodology: the levels of Reality and perception (the body and space as matter), the logic of the included middle (the gesture-sound space) and complexity (the elements of the creative process). These transdisciplinary concepts are then related through the analysis of works sharing an element common to the author's practice, the body at the centre of a sensorial universe. The author then highlights elements of the scenic practice arising from this innovative artistic approach through the expressive body. The path of the performer-creator leading to the design of the SICMAP is then presented, by way of a reflection on the "dream instrument" and the realization of two preparatory gestural interfaces. Because an interface without haptic feedback implies a new gestural vocabulary, that of the free-body gesture, the typology of the instrumental gesture is revisited within the new paradigm of the gesture-sound space. In response to this research, the implementation of the SICMAP is presented from the angle of the technological space and of its application to the gesture-sound space. The compositions realized during the development of the SICMAP are then described from an artistic and poietic point of view through the founding elements of the author's creative process. The conclusion summarizes the objectives of this research-creation as well as the contributions of this new hybrid performative practice.
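The internals of the SICMAP are not given in the abstract. The sketch below only illustrates the generic pattern of mapping tracked gesture parameters to sound-control messages, here sent over OSC with the python-osc package; the OSC addresses, the port and the pitch/amplitude mapping are assumptions, not part of the SICMAP.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical mapping: normalised hand height controls pitch, hand opening
# controls amplitude; values are sent to a sound engine listening for OSC.
client = SimpleUDPClient("127.0.0.1", 57120)  # address and port are assumptions

def send_gesture(hand_height, hand_opening):
    # Three octaves above A2 (110 Hz) as hand_height goes from 0.0 to 1.0.
    pitch_hz = 110.0 * (2.0 ** (hand_height * 3.0))
    client.send_message("/gesture/pitch", pitch_hz)
    client.send_message("/gesture/amp", max(0.0, min(1.0, hand_opening)))

send_gesture(0.5, 0.8)  # mid-height hand, mostly open
```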
