1 |
Multimodal e-assessment : an empirical study. Algahtani, Amirah, January 2015.
With the growing availability of technology, there has been a shift from traditional assessment methods to e-assessment methods designed to support learning. This development raises the question of how suitable and effective the e-assessment interface is. One development in e-assessment interfaces has been the use of the multimodal metaphor, but the effectiveness of multimodality in terms of usability, and its suitability for achieving assessment aims, has not been fully addressed. There is therefore a need to determine the impact of multimodality on the effectiveness of e-assessment and to reveal its benefits, primarily to the user; those involved in development and assessment should also be aware of these potential impacts and benefits. This thesis investigates the role and effectiveness of multimodal metaphors in e-assessment. Specifically, it assesses the effect of multimodal metaphors, alone or in combination, on usability in e-assessment, where usability comprises efficiency, effectiveness and user satisfaction. The empirical research consisted of three experiments of 30 participants each, evaluating in turn description text, avatars and images individually; avatars, description text and recorded speech in combination with images; and finally avatars with whole-body gestures, earcons and auditory icons. The experimental stages were designed as a progression towards the main focus of the study, the effectiveness of full-body-gesture avatars, considered the latest development in multimodal metaphors. The experiments also assessed the role an avatar could play as a tutor in e-assessment interfaces. The results demonstrated the effectiveness and applicability of these metaphors for enhancing e-assessment usability, achieved through more effective interaction between the user and the assessment interface. A set of empirically derived guidelines for the design and use of these metaphors is also provided to support the creation of more usable e-assessment interfaces.
2 |
All the School’s a Stage: A Multimodal Interaction Analysis of a School Administrator’s Literate Life as Dramaturgical Metaphor. Tomlin, Dru D., 17 May 2013.
In Images of Leadership (1991), Bolman and Deal identified four “frames” that school administrators use when making decisions: structural, symbolic, human resource and political. They discovered that the latter two frames, which focus on relationships, partnerships, and communication, were most frequently identified as predicting a school administrator’s success “as both leader and manager” (12). Strikingly, Bolman and Deal found that little emphasis and professional time are afforded to help school administrators learn about these critical frames. While there is ample logistical advice about language use, there is scant research that examines it from a theatrical perspective.
The purpose of this autoethnographic study was to examine my literate life as a school administrator through the use of multimodal interaction analysis (Norris, 2004) and dramaturgical metaphors (Goffman, 1959). The study attempted to address the following research questions: (1) How does my role as a school administrator dramaturgically define the roles I inhabit as I engage in everyday literacy practices in school? and (2) How do I use language, both verbal and nonverbal, to negotiate those roles with my various audiences, specifically with teachers and staff, other leaders, students and parents?
The participant was myself, in my former role as an assistant principal at a suburban elementary school. Data collection and analysis began in May 2012 and concluded at the end of August 2012. Data for the study were collected through a journal based on questions using dramaturgical terms and a collection of the author's/participant's videotaped “performances” with various audiences. The dramaturgical journal was analyzed through Critical Discourse Analysis and deductive coding, while the videotapes were analyzed using Multimodal Interaction Analysis. Poetry was also used throughout the study to include the author's voice, to recontextualize the experience, and to challenge the traditional prose form.
The study revealed the intersection of language and leadership in the life of a school administrator. It also showed how multimodal interaction analysis and dramaturgical metaphors can help educational leaders understand their own literate lives through new lenses and how they can grow from that understanding.
3 |
Planning and Sequencing Through Multimodal Interaction for Robot Programming. Akan, Batu, January 2014.
Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of several sectors. Despite this, robot automation investments are in many cases considered technically challenging. In addition, for most small and medium-sized enterprises (SMEs) this process is associated with high costs: because of their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for most technicians or manufacturing engineers, so assistance from a robot programming expert is often needed. The hypothesis is that, in order to make the use of industrial robots more common within the SME sector, robots should be reprogrammable by technicians or manufacturing engineers rather than by robot programming experts. In this thesis, a novel system for task-level programming is proposed. The user interacts with an industrial robot by giving instructions in a structured natural language and by selecting objects through an augmented reality interface. The proposed system consists of two parts: (i) a multimodal framework that provides a natural language interface for user interaction, performing modality fusion and semantic analysis, and (ii) a symbolic planner, POPStar, that creates a time-efficient plan based on the user's instructions. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague.

This thesis mainly addresses two issues. The first is a general framework for designing and developing multimodal interfaces. The framework proposed here performs natural language understanding, multimodal integration and semantic analysis in an incremental pipeline, and includes a novel multimodal grammar language used for multimodal presentation and semantic meaning generation. Such a framework makes interaction with a robot easier and more natural: the proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands, and interaction via simple voice commands and gestures lets the manufacturing engineer focus on the task itself rather than on the programming of the robot. The second issue stems from an inherent characteristic of natural language communication: instructions given by a user are often vague and may require other actions to be taken before the conditions for applying them are met. To solve this problem, a symbolic planner, POPStar, based on a partial-order planner (POP) is proposed. The system takes landmarks extracted from user instructions as input and creates a sequence of actions to operate the robotic cell with minimal makespan. The planner takes advantage of the partial-order capabilities of POP to execute actions in parallel and employs a best-first search algorithm to find the series of actions that leads to a minimal makespan. It can also handle robots with multiple grippers and parallel machines, as well as scheduling for multiple product types.
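As a rough, hedged illustration of the planning idea summarised above, and not the actual POPStar implementation, the following Python sketch computes the makespan of a partially ordered plan as its critical path and runs a best-first search that serialises hypothetical resource conflicts (for example, two pick actions competing for one gripper) in whichever direction keeps the makespan smallest. All action names, durations and conflicts are invented for the example.

```python
import heapq
import itertools
from typing import Dict, FrozenSet, Iterable, Tuple

Edge = Tuple[str, str]  # ordering constraint: first action must finish before the second starts


def makespan(durations: Dict[str, float], order: Iterable[Edge]) -> float:
    """Critical-path length of a partially ordered plan (unordered actions may overlap)."""
    preds: Dict[str, set] = {a: set() for a in durations}
    for before, after in order:
        preds[after].add(before)
    finish: Dict[str, float] = {}
    remaining = set(durations)
    while remaining:
        ready = [a for a in remaining if preds[a].issubset(finish)]
        if not ready:
            raise ValueError("ordering constraints contain a cycle")
        for a in ready:
            start = max((finish[p] for p in preds[a]), default=0.0)
            finish[a] = start + durations[a]
            remaining.remove(a)
    return max(finish.values(), default=0.0)


def plan_orderings(durations, base_order, conflicts):
    """Best-first search over extra orderings that serialise conflicting actions,
    always expanding the candidate plan with the smallest makespan."""
    tick = itertools.count()  # tie-breaker so the heap never compares plans directly
    start: FrozenSet[Edge] = frozenset(base_order)
    frontier = [(makespan(durations, start), next(tick), start, tuple(conflicts))]
    while frontier:
        cost, _, order, todo = heapq.heappop(frontier)
        if not todo:                      # every conflict serialised: cheapest plan found
            return order, cost
        a, b = todo[0]
        for extra in ((a, b), (b, a)):    # try both ways of serialising this conflict
            candidate = order | {extra}
            try:
                heapq.heappush(frontier,
                               (makespan(durations, candidate), next(tick), candidate, todo[1:]))
            except ValueError:            # this direction creates a cycle: prune it
                continue
    return None, float("inf")


if __name__ == "__main__":
    durations = {"pick_A": 2.0, "mill_A": 5.0, "pick_B": 2.0, "mill_B": 4.0}
    base = [("pick_A", "mill_A"), ("pick_B", "mill_B")]   # each part is picked, then milled
    conflicts = [("pick_A", "pick_B")]                    # one gripper: the picks cannot overlap
    order, m = plan_orderings(durations, base, conflicts)
    print(sorted(order), "makespan:", m)
```

With the toy data above, the search orders pick_A before pick_B, giving a makespan of 8 rather than 9; actions left unordered (here the two milling steps) remain free to run in parallel, which is the POP property the abstract highlights.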
4 |
On Affective States in Computational Cognitive Practice through Visual and Musical Modalities. Tsoukalas, Kyriakos, 29 June 2021.
Learners' affective states correlate with learning outcomes. A key aspect of instructional design is the choice of modalities by which learners interact with instructional content. The existing literature focuses on quantifying learning outcomes without quantifying learners' affective states during instructional activities. Investigating how learners feel during instructional activities will provide the instructional systems design methodology with a method for quantifying the effects of individually available modalities on learners' affect.
The objective of this dissertation is to investigate the relationship between affective states and learning modalities of instructional computing. During an instructional activity, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in three distinct modalities. The modalities concentrate on visual and musical computing for the practice of computational thinking. An affective model for the practice of computational thinking through musical expression was developed and validated.
This dissertation begins with a literature review of relevant theories on embodied cognition, learning, and affective states. It continues with designing and fabricating a prototype instructional apparatus and its virtual simulation as a web service, both for the practice of computational thinking through musical expression, and concludes with a study investigating participants' affective states before and after four distinct online computing activities.
This dissertation builds on and contributes to extant literature by validating an affective model for computational thinking practice through self-expression. It also proposes a nomological network for the construct of computational thinking for future exploration of the construct, and develops a method for the assessment of instructional activities based on predefined levels of skill and knowledge. / Doctor of Philosophy / This dissertation investigates the role of learners' affect during instructional activities of visual and musical computing. More specifically, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in four distinct ways. The computing activities are based on a prototype instructional apparatus, which was designed and fabricated for the practice of computational thinking. A study was performed using a virtual simulation accessible via internet browser. The study suggests that maintaining enjoyment during instructional activities is a more direct path to academic motivation than excitement.
5 |
Everyday interaction in lesbian households : identity work, body behaviour, and action. Viney, Rowena, January 2015.
This thesis is about the resources that speakers can draw on when producing actions, both verbal and non-vocal. It considers how identity categories, gaze and touch can contribute to action in everyday interactions. The study stemmed from an interest in how lesbian identity is made relevant by lesbian speakers in everyday co-present interaction. A corpus of approximately 23.5 hours of video-recordings was gathered: households self-designated as lesbian (including couples, families, and housemates) video-recorded some of their everyday interactions (including mealtimes, watching television, and playing board games). Using the tools of Conversation Analysis and working with the video recordings and transcripts of the interactions, several ways of making a lesbian identity relevant through talk were identified. As the analysis progressed, it was found that many references to sexual identity were produced fleetingly; they were not part of or integral to the ongoing talk, and were not taken up as a topic by participants. Rather, this invoking of a participant's sexual identity appears to contribute to a particular action that is being produced. It was found that invokings of other identities, for example relating to occupation, nationality, and race, worked in a similar way, and this is explored in relation to explanations and accounts. Where the first half of the thesis focuses on verbal invokings of identity in relation to action, the second half of the thesis considers some of the non-vocal resources that participants incorporate into their actions. It was found that when launching a topic related to something in the immediate environment, speakers can use gaze to ensure recipiency. Also, when producing potentially face-threatening actions such as teases, reprimands or insults, speakers can use interpersonal touch to mitigate the threat. In addition to showing how identities can be made relevant in everyday interaction, the findings of this thesis highlight the complexity of action design, and that in co-present interaction the physical resources available to participants also need to be taken into account.
6 |
Multimodal e-learning : an empirical study. Faneer, Musa Khalifa A., January 2015.
This empirical work aims to investigate the impact of using multimodal communication metaphors on e-learning systems’ usability, overall user experience and affective state. The study proposed a triple evaluation approach to avoid the problem of conventional assessment relying only on usability measurements of efficiency, effectiveness and user satisfaction. Usability in that sense refers only to the functional and pragmatic side of the product and neglects other aspects of the system. Learning is a cognitive and repetitive task, requiring learners’ attention as well as their interest. Therefore, when delivering content, in addition to pragmatic functionality, an e-learning system should provide a constructive overall user experience and a positive affective state. Doing so will ensure user engagement, facilitate the learning process and increase learners’ performance. The impact of using five different communication metaphors was evaluated in three dimensions using the proposed approach. Within the usability dimension, the evaluation criteria involved measuring system efficiency, effectiveness, user satisfaction and learning performance. Within the user experience dimension, the evaluation criteria involved measuring the pragmatic aspects of the user experience, the hedonic aspects in terms of stimulation and identification, and overall system attractiveness. Within the affective state dimension, a self-assessment manikin technique was used in conjunction with biofeedback measurements, and users’ valence, arousal and dominance were measured. The study found that system attractiveness and the hedonic user experience had a profound impact on users’ learning performance and attitude toward the tested system. Furthermore, they influenced users’ views and judgement of the system and its usability. The communication metaphors were not equal in their influence within the evaluation criteria. Empirically derived guidelines were produced for the use and integration of these metaphors in e-learning systems. The outcome of the study highlights the need to use the triple evaluation approach in the assessment of e-learning interfaces prior to their release, for better adoption and acceptance by end users.
7 |
A methodology for developing multimodal user interfaces of information systems. Stanciulescu, Adrian, 25 June 2008.
The Graphical User Interface (GUI), as the most prevalent type of User Interface (UI) in today’s interactive applications, restricts the interaction with a computer to the visual modality and is therefore not suited for some users (e.g., with limited literacy or typing skills), in some circumstances (e.g., while moving around, with their hands or eyes busy) or when the environment is constrained (e.g., the keyboard and the mouse are not available). To go beyond the constraints of the GUI, Multimodal (MM) UIs appear as a paradigm that provides users with greater expressive power, naturalness and flexibility.
In this thesis we argue that developing MM UIs combining graphical and vocal modalities is an activity that can benefit from the application of a methodology composed of a set of models, a method manipulating these models, and the tools implementing that method. We therefore define a design-space-based method, supported by model-to-model colored transformations, to obtain MM UIs for information systems. The design space is composed of explicitly defined design options that clarify the development process in a structured way and reduce the required design effort. The feasibility of the methodology is demonstrated through three case studies of different levels of complexity and coverage. In addition, an empirical study is conducted with end users in order to measure the relative usability levels provided by different design decisions.
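As a small, hypothetical sketch of what "explicitly defined design options" can look like in practice (the option names and values below are invented, not the actual design space defined in the thesis), a design space can be modelled as named options with allowed values, and a design decision checked as a point in that space:

```python
from typing import Dict, List


class DesignSpace:
    def __init__(self, options: Dict[str, List[str]]) -> None:
        self.options = options  # option name -> allowed values

    def validate(self, decision: Dict[str, str]) -> List[str]:
        """Return the problems in a design decision (an empty list means it is a valid point)."""
        problems = []
        for name, values in self.options.items():
            if name not in decision:
                problems.append(f"missing option: {name}")
            elif decision[name] not in values:
                problems.append(f"{name}={decision[name]!r} not among {values}")
        return problems


if __name__ == "__main__":
    # Hypothetical options for a graphical+vocal interface, for illustration only.
    space = DesignSpace({
        "input modalities": ["graphical", "vocal", "graphical+vocal"],
        "output modalities": ["graphical", "vocal", "graphical+vocal"],
        "guidance for vocal input": ["none", "prompt", "prompt+grammar hints"],
    })
    decision = {"input modalities": "graphical+vocal",
                "output modalities": "graphical",
                "guidance for vocal input": "prompt"}
    print(space.validate(decision))   # [] means the decision is a valid point in the space
```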
8 |
Wearable Computers and Spatial Cognition. Krum, David Michael, 23 August 2004.
Human beings live and work in large and complex environments. It is often difficult for individuals to perceive and understand the structure of these environments. However, the formation of an accurate and reliable cognitive map, a mental model of the environment, is vital for optimal navigation and coordination. The need to develop a reliable cognitive map is common to the average individual as well as workers with more specialized tasks, for example, law enforcement or military personnel who must quickly learn to operate in a new area.
In this dissertation, I propose the use of a wearable computer as a platform for a spatial cognition aid. This spatial cognition aid uses terrain visualization software, GPS positioning, orientation sensors, and an eyeglass mounted display to provide an overview of the surrounding environment. While there are a number of similar mobile or wearable computer systems that function as tourist guides, navigation aids, and surveying tools, there are few examples of spatial cognition aids.
I present an architecture for the wearable computer based spatial cognition aid using a relationship mediation model for wearable computer applications. The relationship mediation model identifies and describes the user relationships in which a wearable computer can participate and mediate. The dissertation focuses on how the wearable computer mediates the user's perception of the environment. Other components, such as interaction techniques and a scalable system of servers for distributing spatial information, are also discussed.
Several user studies were performed to determine an effective presentation for the spatial cognition aid. Participants were led through an outdoor environment while using different presentations on a wearable computer. The spatial learning of the participants was compared. These studies demonstrated that a wearable computer can be an effective spatial cognition aid. However, factors such as mental rotation, cognitive load, distraction, and divided attention must be taken into account when presenting spatial information to a wearable computer user.
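As a hedged illustration of one geometric step such an aid must perform, and not code from the dissertation, the sketch below projects a GPS point of interest into the wearer's local frame and rotates it by the compass heading so a display can present it "forward-up", sparing the user the mental rotation identified above as a cost. The coordinates and function names are invented, and a simple local equirectangular approximation is assumed, which is adequate over walking distances.

```python
import math

EARTH_RADIUS_M = 6_371_000.0


def to_local_metres(lat0: float, lon0: float, lat: float, lon: float):
    """East/north offset in metres of (lat, lon) from the user at (lat0, lon0)."""
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    return east, north


def to_forward_up(east: float, north: float, heading_deg: float):
    """Rotate an east/north offset into the wearer's frame: +ahead in front, +right to the right.
    heading_deg is the compass heading (0 = north, 90 = east)."""
    h = math.radians(heading_deg)
    right = east * math.cos(h) - north * math.sin(h)
    ahead = east * math.sin(h) + north * math.cos(h)
    return right, ahead


if __name__ == "__main__":
    user = (51.5007, -0.1246)        # hypothetical GPS fix of the wearer
    landmark = (51.5014, -0.1419)    # hypothetical point of interest
    e, n = to_local_metres(*user, *landmark)
    r, a = to_forward_up(e, n, heading_deg=270.0)   # wearer facing west
    print(f"{a:.0f} m ahead, {r:.0f} m to the right")
```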
9 |
Extending a Component Platform to Support Multimodal Applications with Federated Devices [Erweiterung einer Komponentenplattform zur Unterstützung multimodaler Anwendungen mit föderierten Endgeräten]. Kadner, Kay, 09 July 2008.
To complete a task, the user can interact with various devices, which offer different types of interaction (modalities). However, no single device supports every conceivable modality. For this reason, a component-based integration layer is developed on top of a component platform, giving the user the desired freedom in choosing devices and thus modalities. The W3C Multimodal Interaction Framework serves as the starting point. Using the integration layer, the user can, for example, create device federations that can be used individually or jointly for interaction. The integration layer provides several concepts, for example for distributing business logic at runtime, handling component failures, and synchronizing a user interface distributed across multiple devices. The concepts developed were implemented as a prototype, validated, and evaluated for performance.
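The Python sketch below is only an illustration of the federation idea described in this abstract, not the thesis's component platform (which builds on the W3C Multimodal Interaction Framework); class and method names are invented. It models a federation whose members each contribute some modalities and whose shared UI state is re-synchronised across all members after every interaction.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set


@dataclass
class Device:
    name: str
    modalities: Set[str]                                   # e.g. {"voice_in", "screen_out"}
    on_update: Callable[[Dict[str, str]], None] = print    # render callback for this device


class DeviceFederation:
    """A set of devices acting as one interaction endpoint with a shared UI state."""

    def __init__(self) -> None:
        self.devices: Dict[str, Device] = {}
        self.ui_state: Dict[str, str] = {}                 # device-independent UI state

    def join(self, device: Device) -> None:
        self.devices[device.name] = device
        device.on_update(dict(self.ui_state))              # late joiners catch up at once

    def leave(self, name: str) -> None:                    # e.g. after a device failure
        self.devices.pop(name, None)

    def modalities(self) -> Set[str]:
        """Union of member modalities: the federation offers what no single device can."""
        return set().union(*(d.modalities for d in self.devices.values())) if self.devices else set()

    def update(self, field: str, value: str) -> None:
        """Apply an interaction from any member, then re-synchronise every member's UI."""
        self.ui_state[field] = value
        for device in self.devices.values():
            device.on_update(dict(self.ui_state))


if __name__ == "__main__":
    fed = DeviceFederation()
    fed.join(Device("phone", {"voice_in", "small_screen_out"}))
    fed.join(Device("kiosk", {"touch_in", "large_screen_out"}))
    print(fed.modalities())                # combined modalities of the federation
    fed.update("destination", "Dresden")   # e.g. spoken on the phone, shown on both screens
```

Run as a script, the example combines a phone's voice input with a kiosk's large screen, so the federation as a whole offers a modality set that neither device provides on its own.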
10 |
Metrics to evaluate human teaching engagement from a robot's point of view. Novanda, Ori, January 2017.
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that can maximize the benefit to the robot when learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see whether a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality, in an attempt to establish measurement metrics that can be transferred as rules or algorithms beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study reviewed further literature on the detailed elements of engagement and immediacy, and also examined physical effort as a possible metric for measuring the level of engagement of the teachers. An investigatory experiment was conducted to evaluate which modality participants preferred to employ in teaching a robot that can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants appeared to have no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality and a marginal difference in the robot's perceived ability to imitate. A main experiment was then conducted to study the detailed elements that might be used by a robot in identifying a 'good' teacher. It consisted of two sub-experiments: the first recorded the teacher's activities, and the second analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results suggested that in human teaching of a robot (human-robot interaction), humans (the evaluators) also look for some of the immediacy cues that occur in human-human interaction when evaluating engagement.