181

Increase Driving Situation Awareness and In-vehicle Gesture-based Menu Navigation Accuracy with Heads-Up Display

Cao, Yusheng 04 1900 (has links)
More and more novel functions are being integrated into vehicle infotainment systems to allow individuals to perform secondary tasks with high accuracy and low accident risk. Mid-air gesture interaction is one of them. This thesis designed and tested a novel interface to address a specific issue caused by this method of interaction: visual distraction within the car. In this study, a Heads-Up Display (HUD) was integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment was conducted to investigate the potential of this system to improve drivers' driving performance, situation awareness, and gesture interactions. Twenty-four participants were recruited to test the system; they provided subjective feedback about using the system as well as objective performance data. This thesis found that the HUD significantly outperformed the Heads-Down Display (HDD) in participants' preference, perceived workload, level 1 situation awareness, and secondary-task performance. However, to achieve this, participants compensated with poorer driving performance and relatively longer visual distraction. This thesis provides directions for future research on improving the overall user experience while the driver interacts with an in-vehicle gesture interaction system. / M.S. / Driving is one of the most essential daily activities. Until fully autonomous vehicles arrive, driving will remain the primary task when operating a vehicle. However, to improve the overall traveling experience, drivers are also required to perform secondary tasks such as adjusting the air conditioning, switching the music, and navigating the map. Nevertheless, car accidents may happen while drivers perform secondary tasks, because those tasks are a distraction from the primary task, which is driving safely.
Many novel interaction methods have been implemented in modern cars, such as touch-screen interaction, voice interaction, etc. This thesis introduces a new gesture interaction system that allows the user to navigate the secondary-task menus with mid-air gestures. To further avoid visual distraction caused by the system, the gesture interaction system was integrated with a head-up display (HUD) that shows visual feedback on the front windshield, letting the driver use the system without looking in other directions and keeping peripheral vision on the road. The experiment recruited 24 participants to test the system. Each participant provided subjective feedback about their workload, experience, and preference. A driving simulator was used to collect driving performance, eye-tracking glasses were used to collect eye-gaze data, and the gesture menu system was used to collect gesture-system performance. The experiment varied two factors expected to affect the user experience: visual feedback type (HUD vs. Heads-Down Display) and sound feedback (with vs. without), yielding four conditions. Results showed that the HUD helped drivers perform the secondary task faster, understand the current situation better, and experience lower workload. Most participants preferred the HUD over the HDD. However, drivers made some compensations when using the HUD: focusing on the HUD for more time while performing secondary tasks and showing poorer driving performance. By analyzing the resulting data, this thesis provides a direction for conducting HUD and in-vehicle gesture interaction research and improving users' performance and overall experience.
182

Gesture politics and the art of ambiguity: the Iron Age statue from Hirschlanden

Armit, Ian, Grant, P. January 2008 (has links)
No / The discovery of the extraordinary Hirschlanden figure was reported in this journal in 1964. Since then the statue has featured in numerous discussions of Iron Age art and society, to the extent that it has become one of the iconic images of the European Iron Age. It has become almost taken for granted that the Hirschlanden figure is an 'intensely masculine' warrior statue representing the heroised dead. However, certain aspects of the figure suggest a rather deeper, more ambiguous symbolism. The authors use their up-to-date critique to raise questions about the eclectic character of Iron Age spirituality.
183

Gestural communication in orangutans (Pongo pygmaeus and Pongo abelii) : a cognitive approach

Cartmill, Erica A. January 2009 (has links)
While most human language is expressed verbally, the gestures produced concurrent to speech provide additional information, help listeners interpret meaning, and provide insight into the cognitive processes of the speaker. Several theories have suggested that gesture played an important, possibly central, role in the evolution of language. Great apes have been shown to use gestures flexibly in different situations and to modify their gestures in response to changing contexts. However, it has not previously been determined whether ape gestures are defined by structural variables, carry meaning, are used to intentionally communicate specific information to others, or can be used strategically to overcome miscommunication. To investigate these questions, I studied three captive populations of orangutans (Pongo pygmaeus and P. abelii) in European zoos for 10 months. Sixty-four different gestures, defined through similarities in structure and use, were included in the study after meeting strict criteria for intentional usage. More than half of the gesture types were found to coincide frequently with specific goals of signallers, and were accordingly identified as having meanings. Both structural and social variables were found to determine gesture meaning. The recipient’s gaze in both the present and the past, and the recipient’s apparent understanding of the signaller’s gestures, affected the strategies orangutans employed in their attempts to communicate when confronted with different types of communicative failure (e.g. not seeing, ignoring, misunderstanding, or rejecting a gesture). Maternal influence affected the object-directed behaviour and gestures of infants, who shared more gestures with their mothers than with other females. 
These findings demonstrate that gesture can be used as a medium to investigate not only the communication but also the cognition of great apes, and indicate that orangutans are more sensitive to the perceptions and knowledge states of others than previously thought.
184

Social acceptability of wearable technology use in public: an exploration of the societal perceptions of a gesture-based mobile textile interface

Profita, Halley P. 23 May 2011 (has links)
Textile forms of wearable technology offer the potential for users to interact with electronic devices in a whole new manner. However, operating a wearable system can involve non-traditional on-body interactions (including gestural commands) that users may not be comfortable performing in a public setting. Understanding the societal perceptions of gesture-based interactions will ultimately impact how readily a new form of mobile technology is adopted within society. The goal of this research is to assess the social acceptability of a user's interaction with an electronic textile wearable interface. Two means of interaction were studied: the first assessed the most acceptable input method for the interface (tapping, sliding, circular rotation); the second measured the social acceptability of a user interacting with the detachable textile interface at different locations on the body. The study recruited participants who strictly identified themselves as being of American nationality, so as to gain insight into the culture-specific perceptions of interacting with a wearable form of technology.
185

A Generic Gesture Recognition Approach based on Visual Perception

Hu, Gang 22 June 2012 (has links)
Recent developments in hardware have allowed computer vision technologies to analyze complex human activities in real time. High-quality algorithms for human activity interpretation are required by many emerging applications, such as patient behavior analysis, surveillance, gesture-controlled video games, and other human-computer interface systems. Despite great efforts made in the past decades, it is still a challenging task to provide a generic gesture recognition solution that can facilitate the development of different gesture-based applications. Human vision is able to perceive scenes continuously, recognize objects, and grasp motion semantics effortlessly. Neuroscientists and psychologists have tried to understand and explain how exactly the visual system works. Some theories and hypotheses on visual perception, such as visual attention and the Gestalt laws of perceptual organization (PO), have been established and shed light on the fundamental mechanisms of human visual perception. In this dissertation, inspired by those visual attention models, we attempt to model and integrate important visual perception discoveries into a generic gesture recognition framework, which is the fundamental component of full-tier human activity understanding tasks.
Our approach handles challenging tasks by: (1) organizing the complex visual information into a hierarchical structure including low-level feature, object (human body), and 4D spatiotemporal layers; (2) extracting bottom-up shape-based visual salience entities at each layer according to PO grouping laws; (3) building shape-based hierarchical salience maps in favor of high-level tasks for visual feature selection by manipulating attention conditions of the top-down knowledge about gestures and body structures; and (4) modeling gesture representations by a set of perceptual gesture salience entities (PGSEs) that provide qualitative gesture descriptions in 4D space for recognition tasks. Unlike other existing approaches, our gesture representation method encodes both extrinsic and intrinsic properties and reflects the way humans perceive the visual world so as to reduce the semantic gaps. Experimental results show our approach outperforms the others and has great potential in real-time applications. / PhD Thesis
186

La paramétrisation du geste dans les formes musicales scéniques : L'exemple du théâtre musical contemporain : état de l'art, historiographie, analyse / The gesture’s parameterisation in scenic musical forms : Example of contemporary music theatre : state of the art, historiography, analysis

Délécraz, Cyril 11 July 2019 (has links)
Over the course of the 20th century, gesture gradually became a parameter of musical composition in the repertoires of the Western classical tradition. The avant-gardes (Dada, futurism, Bauhaus, expressionism, etc.) are at the origin of the concept of performance, while noise music broadened the field of aesthetically audible sounds. Rejecting grand romantic opera and all its attributes (lyrical singing technique, supremacy of the dramatic text over music, the orchestra pit, the illusion of reality, orchestral opulence, intermissions, etc.), composers felt the need to express themselves differently as the capacity for renewal of the tonal system waned. They defined new scenic musical forms in a framework dictated by performance and in accordance with a new awareness of listening that emerged from numerous studio experiments. After World War II, with the advent of musique concrète, the voice established itself not only as the vector of speech but also as a generator of sound, the scenic space was emptied of the performer and, in return, some composers (Mauricio Kagel, Dieter Schnebel, Luciano Berio, John Cage, to name but a few) offered hybrid pieces where music is combined with the diction of a narrator, pantomime, or theatrical action, without one component constantly getting the upper hand over the others.
Through an empirical approach to the body mediated by video recording (based on a corpus stretching from 1960 to 2016), it is possible to highlight counterpoints of information from several elements of the recorded performances (music, gestures, movements, light, etc.), which ultimately construct a meaningful reality for the listener-spectator. Direct analysis of the performed works shows that gesture is indeed a structural element of a spectacular character that participates in the creation of discursive forms and thereby constitutes an indispensable element of reception. By focusing on the parameterisation of gesture, the present work provides a first element of an answer to the following question: how does musical performance, in its living aspect, integrate gesture, and how does gesture influence form? In this way, it hopes to pave the way for a musicology of contemporary music theatre.
187

Game Accessibility for Children with Cognitive Disabilities : Comparing Gesture-based and Touch Interaction

Gauci, Francesca January 2021 (has links)
The interest in video games has grown substantially over the years, transforming them from a means of recreation into one of the most dominant fields in entertainment. However, a significant number of individuals face obstacles when playing games due to disabilities. While efforts towards more accessible game experiences have increased, cognitive disabilities have often been neglected, partly because games targeting cognitive disabilities are among the most difficult to design, since cognitive accessibility barriers can be present in any part of the game. In recent years, research in human-computer interaction has explored gesture-based technologies and interaction, especially in the context of games and virtual reality. Research on gesture-based interaction has concentrated on providing a new form of interaction for people with cognitive disabilities. Several studies have shown that gesture interaction may provide benefits to individuals with cognitive disabilities, including increased cognitive, motor, and social aptitudes. This study aims to explore the impact of gesture-based interaction on the accessibility of video games for children with cognitive disabilities. Accessibility of gesture interaction is evaluated against touch interaction as the baseline, a comparison founded on previous studies that have argued for the high accessibility and universal availability of touchscreen devices. A game prototype was therefore custom-designed and developed to support both types of interaction, gesture and touch. The game was presented to several users during an interaction study, in which every user played the game with both methods of interaction. The game and the outcome of the user interaction study were further discussed with field experts. This study contributes towards a better understanding of how gesture interaction impacts the accessibility of games for children with cognitive disabilities.
This study concludes that there are certain drawbacks to gesture-based games, especially with regard to precision, accuracy, and ergonomics. As a result, the majority of users preferred the touch interaction method. Nevertheless, some users also considered the gesture game a fun experience. Further, discussion with experts produced several points of improvement to make gesture interaction more accessible. The findings of the study are a departure point for a deeper analysis of gestures and how they can be integrated into the gaming world.
188

Gesture and speech in the oral narratives of Sesotho and Mamelodi Lingo speakers

Ntuli, Nonhlanhla January 2016 (has links)
Dissertation submitted to the Department of African Languages and Linguistics in fulfilment of the requirements for the Master of Arts degree in Humanities, University of the Witwatersrand, School of Literature, Language and Media, March 2016 / The gradual decline in the use of Black South African languages (BSALs) has been a concern for the past 20 years in both the South African civil population and academia. The last census data, from 2011, informs this phenomenon by showing how language use has changed nationally over the years. In an effort to counter this decline, some researchers have called for the improvement of existing non-standard language varieties, which could serve to strengthen some of these declining Black South African languages (Ditsele, 2014). Non-standard language varieties are 'languages' largely spoken in black townships around South Africa. They are sometimes referred to as stylects, sociolects, or speech varieties, due to their structures and functions (Bembe & Beukes, 2007). Applying a psycholinguistic approach, this study compares the standard language Sesotho to a non-standard language variety, Mamelodi Lingo, looking at discursive behaviour with a focus on speech and gesture. Previous literature on South African language varieties focuses on the semantic and pragmatic description of the words in use (Calteaux, 1996; Hurst, 2008; 2015; Rudwick, 2005; Ditsele, 2014), and very few studies have incorporated co-speech gestures, which form an integral part of non-standard language varieties (Brookes, 2001; 2005). The present study presents the results of an empirical investigation that compares 20 narratives produced by Sesotho and Mamelodi Lingo speakers. Following the methodology for the elicitation of speech and gesture of Colletta et al. (2009; 2015), participants watched a speechless short cartoon and were then asked to retell the story they had seen to the interviewer.
Using the language annotation tool ELAN, narratives were annotated for language complexity, length, type of clause, syntax, and story-grammar memory recall. Narratives were also annotated for gesture: type of gesture and function of gesture. The focus was on the discursive performance of speech and gesture. Results show a significantly greater use of meta-narrative clauses in the language variety compared to the standard language, as well as a higher use of non-representational gestures by the non-standard language speakers. The findings also show an interesting use of interactive co-speech gestures when retrieving lexical items that are not present in the repertoire of Mamelodi Lingo. / GR2017
189

A influência do contexto de discurso na segmentação automática das fases do gesto com aprendizado de máquina supervisionado / The influence of the speech context on the automatic segmentation of the phases of the gesture with supervised machine learning

Rocha, Jallysson Miranda 27 April 2018 (has links)
Gestures are actions that are part of human communication. They commonly occur together with speech and can manifest either as an intentional act, such as using the hands to explain the shape of an object, or as a pattern of behavior, such as scratching the head or adjusting one's glasses. Gestures help the speaker construct their speech and help the listener understand the message being conveyed. Researchers from several areas are interested in understanding the relationship of gestures to other elements of the linguistic system, whether to support studies in Linguistics and Psycholinguistics or to improve human-machine interaction. Among the different lines of study that explore this subject is the one that analyzes gestures according to their phases: preparation, pre-stroke hold, stroke, post-stroke hold, hold, and retraction. The development of systems capable of automating the segmentation of a gesture into its phases is therefore useful. Supervised machine learning techniques have already been applied to this problem and promising results have been achieved. However, there is a difficulty inherent to the analysis of gesture phases, which manifests itself when the context in which the gestures are performed changes. Although there are some basic premises for defining the pattern of manifestation of each gesture phase, in different contexts these premises may vary, leading the automatic analysis to a high level of complexity. This is the problem addressed in this work, which studied, with the support of machine learning, the variability of the pattern inherent to each gesture phase when the phases are produced by the same individual but in different contexts of speech production. The contexts of discourse considered in this study are: storytelling, improvisation, description of scenes, interviews, and lectures.
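As a rough illustration of the kind of supervised, frame-wise phase classification this line of work involves (a minimal sketch under invented names and features, not the thesis's actual model, which would use richer features and sequence context):

```python
# Hypothetical sketch: each video frame is reduced to a small feature
# vector (e.g. hand speed and acceleration), a nearest-centroid
# classifier is trained on frames labelled with gesture phases, and new
# frames are then assigned the phase of the closest centroid.
from collections import defaultdict
from math import dist


def train_centroids(frames, labels):
    """Average the feature vectors of each phase into one centroid."""
    groups = defaultdict(list)
    for frame, label in zip(frames, labels):
        groups[label].append(frame)
    return {label: tuple(sum(xs) / len(xs) for xs in zip(*fs))
            for label, fs in groups.items()}


def segment(frames, centroids):
    """Label every frame with the phase of its nearest centroid."""
    return [min(centroids, key=lambda lbl: dist(frame, centroids[lbl]))
            for frame in frames]
```

The context-dependence the abstract describes would show up here as centroids trained in one discourse context (e.g. storytelling) misclassifying frames recorded in another (e.g. lectures).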
190

Gestures in human-robot interaction

Bodiroža, Saša 16 February 2017 (has links)
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. They can therefore be used effectively in human-robot interaction, or in human-machine interaction in general, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary defines which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, i.e. the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning: it can be trained using a small number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
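The dynamic-time-warping, one-shot recognition idea mentioned above can be sketched as follows (a simplified illustration, not the thesis's implementation; names are invented, and a practical system would add windowing constraints and feature normalization):

```python
# Sketch of one-shot gesture recognition: each known gesture is a single
# template sequence of feature vectors, and a query gesture is assigned
# the label of the template with the smallest DTW distance.
from math import dist, inf


def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of vectors."""
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])  # Euclidean distance between frames
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def classify(gesture, templates):
    """One-shot recognition: label of the nearest template by DTW."""
    return min(templates, key=lambda lbl: dtw_distance(gesture, templates[lbl]))
```

Because DTW aligns sequences non-linearly in time, a single template per class already tolerates gestures performed faster or slower than the example, which is what makes the one-shot setting workable.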
