  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Multimodal Interaction for Enhancing Team Coordination on the Battlefield

Cummings, Danielle 16 December 2013 (has links)
Team coordination is vital to the success of team missions. On the battlefield and in other hazardous environments, mission outcomes are often very unpredictable because of unforeseen circumstances and complications encountered that adversely affect team coordination. In addition, the battlefield is constantly evolving as new technology, such as context-aware systems and unmanned drones, becomes available to assist teams in coordinating team efforts. As a result, we must re-evaluate the dynamics of teams that operate in high-stress, hazardous environments in order to learn how to use technology to enhance team coordination within this new context. In dangerous environments where multi-tasking is critical for the safety and success of the team operation, it is important to know what forms of interaction are most conducive to team tasks. We have explored interaction methods, including various types of user input and data feedback mediums that can assist teams in performing unified tasks on the battlefield. We’ve conducted an ethnographic analysis of Soldiers and researched technologies such as sketch recognition, physiological data classification, augmented reality, and haptics to come up with a set of core principles to be used when designing technological tools for these teams. This dissertation provides support for these principles and addresses outstanding problems of team connectivity, mobility, cognitive load, team awareness, and hands-free interaction in mobile military applications. This research has resulted in the development of a multimodal solution that enhances team coordination by allowing users to synchronize their tasks while keeping an overall awareness of team status and their environment. 
The set of solutions we’ve developed utilizes optimal interaction techniques implemented and evaluated in related projects; the ultimate goal of this research is to learn how to use technology to provide total situational awareness and team connectivity on the battlefield. This information can be used to aid the research and development of technological solutions for teams that operate in hazardous environments as more advanced resources become available.
2

Multimodal e-assessment : an empirical study

Algahtani, Amirah January 2015 (has links)
Due to the availability of technology, there has been a shift from traditional assessment methods to e-assessment methods designed to support learning. With this development there is a need to address the suitability and effectiveness of the e-assessment interface. One development in the e-assessment interface has been the use of the multimodal metaphor. However, the effectiveness of multimodality in terms of usability, and its suitability in achieving assessment aims, has not been fully addressed. Thus, there is a need to determine the impact of multimodality on the effectiveness of e-assessment and to reveal its benefits, primarily to the user. Moreover, those involved in development and assessment should be aware of potential impacts and benefits. This thesis investigates the role and effectiveness of multimodal metaphors in e-assessment; specifically, it assesses the effect of multimodal metaphors, alone or in combination, on usability in e-assessment, where usability includes efficiency, effectiveness and user satisfaction. The empirical research described in this study consisted of three experiments of 30 participants each, evaluating (i) the effect of description text, avatars and images individually; (ii) avatars, description text and recorded speech in combination with images; and (iii) the use of avatars with whole-body gestures, earcons and auditory icons. The experimental stages were designed as a progression towards the main focus of the study: the effectiveness of the full-body-gesture avatar, considered to be the latest development in multimodal metaphors. The experimentation also assessed the role that an avatar could play as a tutor in e-assessment interfaces. The results demonstrated the effectiveness and applicability of these metaphors in enhancing e-assessment usability, achieved through more effective interaction between the user and the assessment interface. 
A set of empirically derived guidelines for the design and use of these metaphors is also presented, to support the development of more usable e-assessment interfaces.
3

All the School’s a Stage: A Multimodal Interaction Analysis of a School Administrator’s Literate Life as Dramaturgical Metaphor

Tomlin, Dru D 17 May 2013 (has links)
In Images of Leadership (1991), Bolman and Deal identified four “frames” that school administrators use when making decisions: structural, symbolic, human resource and political. They discovered that the latter two frames, which focus on relationships, partnerships, and communication, were most frequently identified as predicting a school administrator’s success “as both leader and manager” (p. 12). Strikingly, Bolman and Deal found that little emphasis and professional time are afforded to helping school administrators learn about these critical frames. While there is ample logistical advice about language use, there is scant research that examines it from a theatrical perspective. The purpose of this autoethnographic study was to examine my literate life as a school administrator through the use of multimodal interaction analysis (Norris, 2004) and dramaturgical metaphors (Goffman, 1959). The study attempted to address the following research questions: (1) How does my role as a school administrator dramaturgically define the roles I inhabit as I engage in everyday literacy practices in school? and (2) How do I use language, both verbal and nonverbal, to negotiate those roles with my various audiences, specifically with teachers and staff, other leaders, students and parents? The participant was myself, in my former role as an assistant principal at a suburban elementary school. Data collection and analysis began in May 2012 and concluded at the end of August 2012. Data for the study were collected through a journal based on questions using dramaturgical terms and a collection of the author’s/participant’s videotaped “performances” with various audiences. The dramaturgical journal was analyzed through Critical Discourse Analysis and deductive coding, while the videotapes were analyzed using Multimodal Interaction Analysis. 
Poetry was also used throughout the study to include the author’s voice, to recontextualize the experience, and to challenge the traditional prose form. The study revealed the intersection of language and leadership in the life of a school administrator. It also showed how multimodal interaction analysis and dramaturgical metaphors can help educational leaders understand their own literate lives through new lenses and how they can grow from that understanding.
4

Planning and Sequencing Through Multimodal Interaction for Robot Programming

Akan, Batu January 2014 (has links)
Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of several sectors. Despite this fact, in many cases robot automation investments are considered to be technically challenging. In addition, for most small and medium-sized enterprises (SMEs) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods of industrial robots are too complex for most technicians or manufacturing engineers, and thus assistance from a robot programming expert is often needed. The hypothesis is that in order to make the use of industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis, a novel system for task-level programming is proposed. The user interacts with an industrial robot by giving instructions in a structured natural language and by selecting objects through an augmented reality interface. The proposed system consists of two parts: (i) a multimodal framework that provides a natural language interface for the user to interact with, in which the framework performs modality fusion and semantic analysis; (ii) a symbolic planner, POPStar, that creates a time-efficient plan based on the user's instructions. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. This thesis mainly addresses two issues. The first issue is a general framework for designing and developing multimodal interfaces. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline. 
The framework also includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation. Such a framework helps to make interaction with a robot easier and more natural. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than on the programming of the robot. The second issue stems from inherent characteristics of natural-language communication: instructions given by a user are often vague and may require other actions to be taken before the conditions for applying the user's instructions are met. To solve this problem a symbolic planner, POPStar, based on a partial-order planner (POP), is proposed. The system takes landmarks extracted from user instructions as input, and creates a sequence of actions to operate the robotic cell with minimal makespan. The proposed planner takes advantage of the partial-order capabilities of POP to execute actions in parallel, and employs a best-first search algorithm to seek the series of actions that leads to a minimal makespan. The proposed planner can also handle robots with multiple grippers and parallel machines, as well as scheduling for multiple product types.
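The abstract above describes POPStar only at a high level. As an illustrative sketch (not the thesis's actual POPStar implementation, whose details are not given here), the following toy planner shows the general idea of best-first search for a minimal-makespan schedule of precedence-constrained actions on a small number of identical resources (e.g., two grippers). All names and the example actions are hypothetical.

```python
import heapq

def best_first_schedule(durations, prereqs, machines=2):
    """Minimal-makespan scheduling of precedence-constrained actions on
    `machines` identical resources, via best-first (uniform-cost) search.

    durations: {action: duration}
    prereqs:   {action: [actions that must finish before it starts]}
    Returns (makespan, {action: finish_time}).
    """
    actions = frozenset(durations)
    # State: (done actions, sorted machine-free times, sorted finish times)
    start = (frozenset(), (0.0,) * machines, ())
    heap = [(0.0, 0, start)]  # (partial makespan, tiebreak id, state)
    tiebreak = 1
    seen = set()
    while heap:
        makespan, _, state = heapq.heappop(heap)
        done, free, finish = state
        if done == actions:
            # Partial makespan never decreases along a path, so the first
            # complete schedule popped has minimal makespan (Dijkstra-style).
            return makespan, dict(finish)
        if state in seen:
            continue
        seen.add(state)
        finish_d = dict(finish)
        for a in actions - done:
            if any(p not in done for p in prereqs.get(a, ())):
                continue  # precedence constraints not yet satisfied
            for m in range(machines):
                # Earliest start: machine m free AND all prerequisites finished.
                ready = max([free[m]] + [finish_d[p] for p in prereqs.get(a, ())])
                end = ready + durations[a]
                new_free = list(free)
                new_free[m] = end
                new_state = (done | {a},
                             tuple(sorted(new_free)),  # machines interchangeable
                             tuple(sorted({**finish_d, a: end}.items())))
                heapq.heappush(heap, (max(makespan, end), tiebreak, new_state))
                tiebreak += 1
    return None
```

For a serial pick-move-place chain this degenerates to the sum of durations, while independent actions are spread across both grippers in parallel, which is the behavior the abstract attributes to exploiting POP's partial-order capabilities.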
5

On Affective States in Computational Cognitive Practice through Visual and Musical Modalities

Tsoukalas, Kyriakos 29 June 2021 (has links)
Learners' affective states correlate with learning outcomes. A key aspect of instructional design is the choice of modalities by which learners interact with instructional content. The existing literature focuses on quantifying learning outcomes without quantifying learners' affective states during instructional activities. An investigation of how learners feel during instructional activities will inform the instructional systems design methodology of a method for quantifying the effects of individually available modalities on learners' affect. The objective of this dissertation is to investigate the relationship between affective states and learning modalities of instructional computing. During an instructional activity, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in three distinct modalities. The modalities concentrate on visual and musical computing for the practice of computational thinking. An affective model for the practice of computational thinking through musical expression was developed and validated. This dissertation begins with a literature review of relevant theories on embodied cognition, learning, and affective states. It continues with designing and fabricating a prototype instructional apparatus and its virtual simulation as a web service, both for the practice of computational thinking through musical expression, and concludes with a study investigating participants' affective states before and after four distinct online computing activities. This dissertation builds on and contributes to extant literature by validating an affective model for computational thinking practice through self-expression. It also proposes a nomological network for the construct of computational thinking for future exploration of the construct, and develops a method for the assessment of instructional activities based on predefined levels of skill and knowledge. 
Doctor of Philosophy (general-audience abstract): This dissertation investigates the role of learners' affect during instructional activities of visual and musical computing. More specifically, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in four distinct ways. The computing activities are based on a prototype instructional apparatus, which was designed and fabricated for the practice of computational thinking. A study was performed using a virtual simulation accessible via an internet browser. The study suggests that maintaining enjoyment during instructional activities is a more direct path to academic motivation than excitement.
6

Everyday interaction in lesbian households : identity work, body behaviour, and action

Viney, Rowena January 2015 (has links)
This thesis is about the resources that speakers can draw on when producing actions, both verbal and non-vocal. It considers how identity categories, gaze and touch can contribute to action in everyday interactions. The study stemmed from an interest in how lesbian identity is made relevant by lesbian speakers in everyday co-present interaction. A corpus of approximately 23.5 hours of video-recordings was gathered: households self-designated as lesbian (including couples, families, and housemates) video recorded some of their everyday interactions (including mealtimes, watching television, and playing board games). Using the tools of Conversation Analysis and working with the video recordings and transcripts of the interactions, several ways of making a lesbian identity relevant through talk were identified. As the analysis progressed, it was found that many references to sexual identity were produced fleetingly; they were not part of or integral to the ongoing talk, and were not taken up as a topic by participants. Rather, this invoking of a participant's sexual identity appears to contribute to a particular action that is being produced. It was found that invokings of other identities, for example relating to occupation, nationality, and race, worked in a similar way, and this is explored in relation to explanations and accounts. Where the first half of the thesis focuses on verbal invokings of identity in relation to action, the second half of the thesis considers some of the non-vocal resources that participants incorporate into their actions. It was found that when launching a topic related to something in the immediate environment, speakers can use gaze to ensure recipiency. Also, when producing potentially face-threatening actions such as teases, reprimands or insults, speakers can use interpersonal touch to mitigate the threat. 
In addition to showing how identities can be made relevant in everyday interaction, the findings of this thesis highlight the complexity of action design, and that in co-present interaction the physical resources available to participants also need to be taken into account.
7

Multimodal e-learning : an empirical study

Faneer, Musa Khalifa A. January 2015 (has links)
This empirical work aims to investigate the impact of using multimodal communication metaphors on e-learning systems’ usability, overall user experience and affective state. The study proposed a triple evaluation approach to avoid the problem of conventional assessment relying only on usability measurements of efficiency, effectiveness and user satisfaction. Usability in that sense refers only to the functionality and pragmatic side of the product and neglects other aspects of the system. Learning is a cognitive and repetitive task, requiring learners’ attention as well as their interest. Therefore, when delivering content, in addition to the pragmatic functionality, an e-learning system should provide a constructive overall user experience and positive affective state. Doing so will ensure user engagement, facilitate the learning process and increase learners’ performance. The impact of using five different communication metaphors was evaluated in three dimensions using the proposed approach. Within the usability dimension, the evaluation criteria involved measuring system efficiency, effectiveness, user satisfaction and learning performance. Within the user experience dimension, the evaluation criteria involved measuring the pragmatic aspects of the user experience, the hedonic aspects in terms of stimulation and identification, and overall system attractiveness. Within the affective state dimension, a self-assessment manikin technique was used in conjunction with biofeedback measurements, and users’ valence, arousal and dominance were measured. The study found that system attractiveness and the hedonic user experience had a profound impact on users’ learning performance and attitude toward the tested system. Furthermore, they influenced users’ views and judgement of the system and its usability. The communication metaphors were not equal in their influence within the evaluation criteria. 
Empirically derived guidelines were produced for the use and integration of these metaphors in e-learning systems. The outcome of the study highlights the need to use the triple evaluation approach in the assessment of e-learning interfaces prior to their release for better adoption and acceptance by end users.
8

A methodology for developing multimodal user interfaces of information systems

Stanciulescu, Adrian 25 June 2008 (has links)
The Graphical User Interface (GUI), as the most prevailing type of User Interface (UI) in today’s interactive applications, restricts the interaction with a computer to the visual modality and is therefore not suited for some users (e.g., with limited literacy or typing skills), in some circumstances (e.g., while moving around, with their hands or eyes busy) or when the environment is constrained (e.g., the keyboard and the mouse are not available). In order to go beyond the GUI constraints, Multimodal (MM) UIs appear as a paradigm that provides users with great expressive power, naturalness and flexibility. In this thesis we argue that developing MM UIs combining graphical and vocal modalities is an activity that could benefit from the application of a methodology which is composed of: a set of models, a method manipulating these models and the tools implementing the method. Therefore, we define a design space-based method that is supported by model-to-model colored transformations in order to obtain MM UIs of information systems. The design space is composed of explicitly defined design options that clarify the development process in a structured way in order to require less design effort. The feasibility of the methodology is demonstrated through three case studies with different levels of complexity and coverage. In addition, an empirical study is conducted with end-users in order to measure the relative usability level provided by different design decisions.
9

Design and Evaluation of 3D Multimodal Virtual Environments for Visually Impaired People

Huang, Ying Ying January 2010 (has links)
Spatial information presented visually is not easily accessible to visually impaired users. Current technologies, such as screen readers, cannot intuitively convey spatial layout or structure. This lack of overview is an obstacle for a visually impaired user, both when using the computer individually and when collaborating with other users. With the development of haptic and audio technologies, it is possible to let visually impaired users access three-dimensional (3D) Virtual Reality (VR) environments through the senses of touch and hearing. The work presented in this thesis comprises investigations of haptic and audio interaction for visually impaired computer users in two stages. The first stage of my research focused on collaborations between sighted and blindfolded computer users in a shared virtual environment. One aspect I considered is how different modalities affect one’s awareness of the other’s actions, as well as of one’s own actions, during the work process. The second aspect I investigated is common ground, i.e. how visually impaired people obtain a common understanding of the elements of their workspace through different modalities. A third aspect I looked at was how different modalities affect perceived social presence, i.e. the ability to perceive the other person’s intentions and emotions. Finally, I attempted to understand how human behavior and efficiency in task performance are affected when different modalities are used in collaborative situations. The second stage of my research focused on how the visually impaired access a 3D multimodal virtual environment individually. I conducted two studies based on two different haptic and audio prototypes concerning the effect of haptic-audio modalities on navigation and interface design. One prototype that I created was a haptic and audio game, a labyrinth. The other is a virtual simulation environment based on the real world of Kulturhuset in Stockholm. 
One aspect I investigated in this individual interaction is how it is possible for users to access the spatial layout through a multimodal virtual environment. The second aspect I investigated is usability: how the haptic and audio cues help visually impaired people understand the spatial layout. The third aspect concerns navigation and cognitive mapping in a multimodal virtual environment. This thesis contributes to the field of human-computer interaction for the visually impaired with a set of studies of multimodal interactive systems, and brings new perspectives to the enhancement of understanding real environments for visually impaired users through a haptic and audio virtual computer environment.
10

Wearable Computers and Spatial Cognition

Krum, David Michael 23 August 2004 (has links)
Human beings live and work in large and complex environments. It is often difficult for individuals to perceive and understand the structure of these environments. However, the formation of an accurate and reliable cognitive map, a mental model of the environment, is vital for optimal navigation and coordination. The need to develop a reliable cognitive map is common to the average individual as well as workers with more specialized tasks, for example, law enforcement or military personnel who must quickly learn to operate in a new area. In this dissertation, I propose the use of a wearable computer as a platform for a spatial cognition aid. This spatial cognition aid uses terrain visualization software, GPS positioning, orientation sensors, and an eyeglass-mounted display to provide an overview of the surrounding environment. While there are a number of similar mobile or wearable computer systems that function as tourist guides, navigation aids, and surveying tools, there are few examples of spatial cognition aids. I present an architecture for the wearable computer based spatial cognition aid using a relationship mediation model for wearable computer applications. The relationship mediation model identifies and describes the user relationships in which a wearable computer can participate and mediate. The dissertation focuses on how the wearable computer mediates the user's perception of the environment. Other components such as interaction techniques and a scalable system of servers for distributing spatial information are also discussed. Several user studies were performed to determine an effective presentation for the spatial cognition aid. Participants were led through an outdoor environment while using different presentations on a wearable computer. The spatial learning of the participants was compared. These studies demonstrated that a wearable computer can be an effective spatial cognition aid. 
However, factors such as mental rotation, cognitive load, distraction, and divided attention must be taken into account when presenting spatial information to a wearable computer user.
