  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

What role does effort play: the effect of effort for gesture interfaces and the effect of pointing on spatial working memory

Liu, Xiaoxing 01 August 2016 (has links)
Automatically recognizing hand gestures is a promising approach to communicating with computers, particularly when keyboard and mouse interaction is inconvenient, when only a brief interaction is necessary, or when a command involves a three-dimensional, spatial component. Which gestures are most convenient or preferred in various circumstances is unknown. This work explores the idea that the perceived physical effort of a hand gesture influences users' preference for using it when communicating with a computer. First, the hypothesis that people prefer gestures requiring less effort is tested by measuring the perceived effort and appeal of simple gestures. The results demonstrate that gestures perceived as less effortful are more likely to be accepted and preferred. The second experiment tests a similar hypothesis with three-dimensional selection tasks: participants used a tapping gesture to select among 16 targets in two environments that differed primarily in the physical distance required to finish the task. Participants again favored the less effortful environment. Together the experiments suggest that effort is an important factor in user preference for gestures. The effort-to-reliability tradeoff present in the majority of current gesture interfaces is then studied in experiment 3. Participants were presented with 10 different levels of effort-to-reliability tradeoff and decided which tradeoff they preferred; extreme conditions were intentionally avoided. On average, participants rated their preferred condition 4.23 on a 10-point scale of perceived effort and could achieve a success rate of approximately 70%. Finally, the question of whether pointing to objects enhances recall of their visuospatial position in a three-dimensional virtual environment is explored. The results show that pointing actually decreases memory relative to passive viewing.
All in all, this work suggests that effort is an important factor and that there is an optimal balance in the effort-to-reliability tradeoff from the user's perspective. Understanding and carefully considering this point can help make future gesture interfaces more usable.
242

Le langage des mains dans les arts figurés en France (1604-1795) / Language of the hands in the figurative arts of France (1604-1795)

Dimova, Temenuzhka 26 September 2017 (has links)
The iconographic language of the hands is a conventional system used by painters to endow the characters in their works with particular discursive, affective and symbolic functions. In this work, we show that the analysis of figurative gestures reveals new narrative structures. The different gestural signs are studied according to their origins, usages, connotations and stylistics in French art of the 17th and 18th centuries. With the aim of understanding the semantic potential of the hand and its implication in works of art, we drew on writings from multiple epistemic fields. In this study, we underline the importance of chirology, a discipline exploring the meaningful configurations of the hands and their possible applications. Pictorial gestures do not function only in isolation but are involved in schemas of interaction, connected to the genre and the composition of the work of art. The study of the language of the hands fosters dialogue between art history and other scientific disciplines engaged with questions of perception, representation and memory.
243

Generalized Conditional Matching Algorithm for Ordered and Unordered Sets

Krishnan, Ravikiran 13 November 2014 (has links)
Designing generalized, data-driven distance measures for both ordered and unordered set data is the core focus of the proposed work. An ordered set is one in which a time-linear property must be maintained when computing the distance between a pair of temporal segments. One ordered-set application is human gesture analysis from RGB-D data. Human gestures are fast becoming a natural form of human-computer interaction, which motivates the modeling, analysis, and recognition of gestures. The large number of gesture categories, such as sign language, traffic signals, and everyday actions, along with subtle cultural variations between gesture classes, makes gesture recognition a challenging problem. To demonstrate generalization, an algorithm is also proposed for an overlap-speech detection application on unordered sets. Any gesture recognition task involves comparing an incoming or query gesture against a training set of gestures. Having only one or a few samples per class rules out class-statistic learning approaches to classification, since the full range of variation is not covered. The large variability within gesture classes also makes temporally segmenting individual gestures hard. A matching algorithm in such scenarios needs to handle single-sample classes and to label multiple gestures without temporal segmentation. Each gesture sequence is considered a class, and each class is a data point in an input space. The pattern of pair-wise distances between two gesture frame sequences, conditioned on a third (anchor) sequence, is considered; these patterns are referred to as warp vectors, and the process of comparing them is defined as conditional distances. At the algorithmic core are two dynamic time warping processes: one to compute the warp vectors against the anchor sequences, and the other to compare these warp vectors. We show that a class-dependent distance function can disambiguate the classification process when samples of different classes are close to each other.
When the model base is large (that is, the number of classes is large), the disadvantage of such a distance is its computational cost. A distributed version combined with sub-sampling of anchor gestures is proposed as a speedup strategy. To label multiple connected gestures in a query, we use a simultaneous segmentation and recognition matching algorithm called level building, in its dynamic programming implementation. The core of this algorithm is a distance function that compares two gesture sequences; we replace that function with the proposed conditional distance, yielding a version called conditional level building (CLB). We present results on a large dataset of 8,000 RGB-D sequences spanning over 200 gesture classes, extracted from the ChaLearn Gesture Challenge dataset. Conditional distance achieves a significant improvement over the underlying distance from which it is computed. As an application to unordered sets and non-visual data, an overlap-speech segment detection algorithm is proposed. Speech recognition systems have a wide variety of applications but fail when overlapping speech is involved, especially in a meeting-room setting. The ability to recognize a speaker and localize him or her in the room is an important step toward a higher-level representation of meeting dynamics. As in gesture recognition, a new distance function is defined and serves as the core of the algorithm for distinguishing individual-speech from overlap-speech temporal segments. Overlap-speech detection is framed as an outlier detection problem: incoming audio is broken into temporal segments using the Bayesian Information Criterion (BIC), each segment is treated as a node, and conditional distances between the nodes are computed.
The underlying distance for the triples used in conditional distances is the symmetric KL divergence. As each node is modeled as a Gaussian, the distance between two segments or nodes is given by a Monte-Carlo estimate of the KL divergence. An MDS-based global embedding is created from the pairwise distances between the nodes, and RANSAC is applied to identify the outliers. Experiments on overlap-speech detection use the NIST meeting-room dataset; the conditional-distance-based approach achieves an improvement of more than 20% over a KL-distance-based approach.
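The two-stage construction described in this abstract can be sketched roughly as follows. This is a simplified illustration, not the thesis's implementation: the function names (`dtw`, `warp_vector`, `conditional_distance`) are illustrative, and the final comparison of warp vectors uses plain Euclidean distance where the thesis uses a second DTW pass.

```python
def dtw(a, b, dist):
    # Classic dynamic time warping: returns total cost and the warp path.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    path.reverse()
    return D[n][m], path

def warp_vector(seq, anchor, dist):
    # Per-frame distances of the anchor to its DTW-aligned frames in seq.
    _, path = dtw(seq, anchor, dist)
    vec, counts = [0.0] * len(anchor), [0] * len(anchor)
    for i, j in path:
        vec[j] += dist(seq[i], anchor[j])
        counts[j] += 1
    return [v / c if c else 0.0 for v, c in zip(vec, counts)]

def conditional_distance(query, model, anchor, dist):
    # Distance between query and model *conditioned* on the anchor:
    # compare their warp vectors (Euclidean here; DTW in the thesis).
    wq = warp_vector(query, anchor, dist)
    wm = warp_vector(model, anchor, dist)
    return sum((x - y) ** 2 for x, y in zip(wq, wm)) ** 0.5
```

Identical sequences produce identical warp vectors against any anchor, so their conditional distance is zero regardless of the anchor chosen.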
244

Constructing bodies: gesture, speech and representation at work in architectural design studios

Mewburn, Inger Blackford January 2009 (has links)
Previous studies of the design studio have tended to treat learning to design as a matter of learning to think in the right way, despite the recognition that material artifacts, and the ability to make and manipulate them in architectural ways, are important to the design process. Through empirical data gathered from watching design teachers and students in action, this thesis works to discover how material things and bodies contribute to the fabrication of architectural meaning and architectural subjectivity within design studios. In particular, gesture is highlighted as doing important work in design studio knowledge practices.

The approach taken in this thesis is to treat design activity in design studios in a ‘post-human’ way. An analytical eye is turned to how things and people perform together and are organised in various ways, using Actor-network theory (ANT) to orient the investigation. The assumption drawn from ANT is that architectural meaning, knowledge and identity can be positioned as network effects, enacted into being as the design studio is ‘done’ by the various actors, including material things, such as architectural representations, and human behaviours, such as gesture.

Gesture has been largely ignored by design studio researchers, perhaps because it tends to operate below the threshold of conscious awareness. Gesture is difficult to study because the meanings of most gestures produced during conversations are spontaneous and provisional. Despite this, humans seem to be good interpreters of gesture. When studied in detail, ongoing design studio activity is found to rely on the intelligibility of gesture done in ‘architectural ways’. The main site for the observation of gesture during this study was the ‘desk crit’, where teachers and students confer about work in progress.

In the data gathered for this thesis, gesture is found to operate with representations in three key ways: explaining and describing architectural composition; ‘sticking’ spoken meanings strategically to representations; and conveying the phenomenological experience of occupying architectural space, such as the passing of time, quality of light, texture and movement.

Although most of the work of the thesis centres on human behaviour, the findings about the role of gesture and representation trouble the idea of the human as the centre of the action, putting the bodies of teachers and students amongst a crowd of non-human others who participate together in design knowledge-making practices.
245

A Framework for Mobile Paper-based Computing

Sylverberg, Tomas January 2007 (has links)
Military work-practice is a difficult area of research where paper-based approaches are still widespread. This thesis proposes a solution which permits the digitization of information while work-practice remains unaltered for soldiers working with maps in the field. For this purpose, a mobile interactive paper-based platform has been developed which permits users to maintain their current work-flow. The solution is based on a system consisting of a prepared paper map, a cellular phone, a desktop computer, and a digital pen with a Bluetooth connection. The underlying idea is to let soldiers take advantage of the information a computerized system can offer while minimizing the overhead it incurs. On the one hand this implies that the solution must be lightweight; on the other, it must retain current working procedures as far as possible. The desktop computer is used to develop new paper-driven applications through the application provided in the development framework, thus allowing applications to be tailored to the changing needs of military operations. One major component in the application suite is a symbol recognizer capable of recognizing symbols based on a template, which can be created in one of the applications. This component permits the digitization of information in the battlefield by drawing on the paper map. The proposed solution has been found to be viable, but there is a need for further development, and the existing hardware must be adapted to military requirements to make it usable in a real-world situation.
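The abstract does not specify how the template-based symbol recognizer works. One plausible shape for such a component, sketched here purely as an assumption, is a $1-style matcher: resample each pen stroke to a fixed number of points, normalize position and scale, and score against stored templates by mean point-to-point distance. The names `resample`, `normalize`, and `match` are illustrative.

```python
import math

def _path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=32):
    # Respace the stroke to n points evenly spread along its arc length.
    pts = list(pts)
    interval = _path_length(pts) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(pts):
    # Translate the centroid to the origin and scale to a unit box.
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / s, (y - cy) / s) for x, y in pts]

def match(stroke, templates, n=32):
    # Return the label of the template with the smallest mean distance.
    q = normalize(resample(stroke, n))
    def score(t):
        tt = normalize(resample(t, n))
        return sum(math.dist(a, b) for a, b in zip(q, tt)) / n
    return min(templates, key=lambda kv: score(kv[1]))[0]
```

Because strokes are normalized for position and scale, a symbol drawn larger or elsewhere on the map still matches its template.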
246

Mixed reality interactive storytelling : acting with gestures and facial expressions

Martin, Olivier 04 May 2007 (has links)
This thesis aims to answer the following question: “How can gestures and facial expressions be used to control the behavior of an interactive entertaining application?” An answer is presented and illustrated in the context of mixed reality interactive storytelling. The first part focuses on the Artificial Intelligence (AI) mechanisms used to model and control the behavior of the application. We present an efficient real-time hierarchical planning engine and show how active modalities (such as intentional gestures) and passive modalities (such as facial expressions) can be integrated into the planning algorithm, so that the narrative (driven by the behavior of the virtual characters inside the virtual world) can effectively evolve in accordance with user interactions. The second part is devoted to the automatic recognition of user interactions. After briefly describing the implementation of a simple but robust rule-based gesture recognition system, the emphasis is placed on facial expression recognition. A complete solution integrating state-of-the-art techniques with original contributions is presented, including face detection, facial feature extraction, and analysis. The proposed approach combines statistical learning and probabilistic reasoning to deal with the uncertainty associated with modeling facial expressions.
247

Structuring information through gesture and intonation

Jannedy, Stefanie, Mendoza-Denton, Norma January 2005 (has links)
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several "semiotic layers": modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally occurring political speech setting.
248

A Novel Accelerometer-based Gesture Recognition System

Akl, Ahmad 14 December 2010 (has links)
Gesture recognition provides an efficient means of human-computer interaction for interactive and intelligent computing. In this work, we address gesture recognition using the theory of random projection, formulating the recognition problem as an $\ell_1$-minimization problem. The system uses a single 3-axis accelerometer for data acquisition and comprises two main stages: training and testing. For training, the system employs dynamic time warping and affinity propagation to create exemplars for each gesture; for testing, it projects all candidate traces and the unknown trace onto the same lower-dimensional subspace for recognition. A dictionary of 18 gestures is defined, and a database of over 3,700 traces collected from 7 subjects is used to test and evaluate the system. Simulation results reveal superior performance, in terms of accuracy and computational complexity, compared to other systems in the literature.
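As a rough sketch of the projection stage: accelerometer traces, flattened to a fixed length d, can be mapped by a random Gaussian matrix into a k-dimensional subspace where comparisons are cheap, since such projections approximately preserve distances. The nearest-exemplar rule below is a simplified stand-in for the $\ell_1$-minimization solver the abstract describes, and all names (`random_projection_matrix`, `project`, `classify`) are illustrative.

```python
import random

def random_projection_matrix(d, k, seed=0):
    # k x d matrix with i.i.d. N(0, 1/k) entries (Johnson-Lindenstrauss style).
    rng = random.Random(seed)
    return [[rng.gauss(0, 1.0 / k ** 0.5) for _ in range(d)] for _ in range(k)]

def project(trace, R):
    # Map a length-d trace into the k-dimensional subspace.
    return [sum(r * x for r, x in zip(row, trace)) for row in R]

def classify(query, exemplars, R):
    # Label the query by its nearest exemplar in the projected subspace
    # (a nearest-neighbor stand-in for the l1-minimization solver).
    q = project(query, R)
    best, best_d = None, float("inf")
    for label, ex in exemplars:
        e = project(ex, R)
        d = sum((a - b) ** 2 for a, b in zip(q, e))
        if d < best_d:
            best, best_d = label, d
    return best
```

A single projection matrix is drawn once and shared between candidate traces and the unknown trace, so all comparisons happen in the same subspace, as the abstract requires.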
250

PERFORMATIVE GESTURES: An Exhibition of Painting

Urbanski, Miranda 29 April 2009 (has links)
My painted self-portraiture explores identity as changing social performance or masquerade and examines bodily flesh as the vital interface for reciprocal encounter on life’s stage. The larger-than-life sized images demand viewer attention and compel intersubjective engagement. The works also affirm artistic agency and subjective presence through gestural brushwork and the vivifying power of oil paint. Hybridity and ambiguity in the images suggest the dynamic and reflexive nature of identity. A theatrical colour palette further reinforces the notion of identity as social performance or masquerade. Conceptually the works are rooted in both post-modern feminism and phenomenology. Artistically they draw inspiration from contemporary figurative painters and portraitists who use this medium and genre to navigate the boundaries of self and society.
