1 |
Rethinking Pen Input Interaction: Enabling Freehand Sketching Through Improved Primitive Recognition
Paulson, Brandon C. 2010 May 1900 (has links)
Online sketch recognition uses machine learning and artificial intelligence techniques
to interpret markings made by users via an electronic stylus or pen. The
goal of sketch recognition is to understand the intention and meaning of a particular
user's drawing. Diagramming applications have been the primary beneficiaries
of sketch recognition technology, as it is commonplace for the users of these tools to
first create a rough sketch of a diagram on paper before translating it into a machine
understandable model, using computer-aided design tools, which can then be used to
perform simulations or other meaningful tasks.
Traditional methods for performing sketch recognition can be broken down into
three distinct categories: appearance-based, gesture-based, and geometric-based. Although
each approach has its advantages and disadvantages, geometric-based methods
have proven to be the most generalizable for multi-domain recognition. Tools such as
the LADDER symbol description language have been shown to be capable of recognizing
sketches from over 30 different domains using generalizable, geometric techniques.
The LADDER system is limited, however, by the fact that it uses a low-level recognizer
that supports only a few primitive shapes, the building blocks for describing
higher-level symbols. Systems that support a larger number of primitive shapes have
been shown to have questionable accuracy as the number of primitives increases, or
they place constraints on how users must input shapes (e.g., circles can only be drawn
in a clockwise motion; rectangles must be drawn starting at the top-left corner).
This dissertation enables significant growth in the capabilities of free-sketch
recognition systems, those which place little to no drawing constraints on users. In
this dissertation, we describe multiple techniques to recognize upwards of 18 primitive
shapes while maintaining high accuracy. We also provide methods for producing
confidence values and generating multiple interpretations, and explore the difficulties
of recognizing multi-stroke primitives. In addition, we show the need for a standardized
data repository for sketch recognition algorithm testing and propose SOUSA
(sketch-based online user study application), our online system for performing and
sharing user study sketch data. Finally, we show how the principles we have
learned through our work extend to other domains, including activity recognition
using trained hand posture cues.
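The geometric approach the abstract describes can be pictured as fitting each candidate primitive to a stroke and keeping the one with the lowest fitting error. The following is a minimal illustrative sketch of that idea for just two primitives, line and circle, under assumed least-squares error tests; it is not the dissertation's actual recognizer.

```python
import math

def fit_line_error(points):
    # Total least-squares line fit: error is the mean perpendicular
    # distance of the points from the best-fit line through the centroid.
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # Principal-axis direction from the 2x2 covariance matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    nx, ny = -math.sin(theta), math.cos(theta)  # unit normal to the line
    return sum(abs(nx * (p[0] - mx) + ny * (p[1] - my)) for p in points) / n

def fit_circle_error(points):
    # Crude circle fit: centroid as centre, mean radius; error is the
    # mean deviation of each point's distance from that mean radius.
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    radii = [math.hypot(p[0] - cx, p[1] - cy) for p in points]
    r = sum(radii) / n
    return sum(abs(d - r) for d in radii) / n

def classify(points):
    # Label the stroke with whichever primitive fits it better.
    return "line" if fit_line_error(points) <= fit_circle_error(points) else "circle"
```

A full recognizer along these lines would compare many more primitives, produce per-shape confidence values from the fit errors, and rank the alternatives as multiple interpretations.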
|
2 |
Adaptive Intelligent User Interfaces With Emotion Recognition
Nasoz, Fatma 01 January 2004 (has links)
The focus of this dissertation is on creating Adaptive Intelligent User Interfaces that facilitate more natural communication during Human-Computer Interaction by recognizing users' affective states (i.e., the emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a Virtual Reality environment to collect physiological signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected signals and to find unique physiological patterns of emotions.

An Emotion Elicitation with Movie Clips experiment was conducted to elicit Sadness, Anger, Surprise, Fear, Frustration, and Amusement from participants. Overall, the KNN, DFA, and MBP algorithms recognized these emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A Driving Simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness). The KNN, MBP, and RBP algorithms were used to classify the physiological signals by the corresponding emotions; overall, KNN classified these three states with 66.3% accuracy, MBP with 76.7%, and RBP with 91.9%.

Adaptation of the interface was designed to provide multi-modal feedback to users about their current affective state and to respond to users' negative emotional states in order to decrease the possible negative impacts of those emotions.
A Bayesian Belief Networks formalization was employed to develop the User Model, enabling the intelligent system to adapt appropriately to the current context and situation by considering user-dependent factors such as personality traits and preferences.
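The KNN step described above amounts to labelling a new physiological sample by majority vote among its nearest training samples in feature space. The sketch below is purely illustrative: the feature names and values (normalized skin-conductance and heart-rate readings) are hypothetical stand-ins, not the signals or data from the experiments.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query sample by majority vote among its k nearest
    training samples, using Euclidean distance in feature space.
    `train` is a list of (feature_vector, emotion_label) pairs."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical normalized (skin-conductance, heart-rate) feature vectors.
train = [
    ((0.90, 0.80), "fear"), ((0.85, 0.90), "fear"), ((0.80, 0.85), "fear"),
    ((0.20, 0.30), "boredom"), ((0.15, 0.25), "boredom"), ((0.25, 0.20), "boredom"),
]
print(knn_classify(train, (0.82, 0.88)))  # fear
```

The neural-network classifiers (MBP, RBP) replace this vote with a trained decision boundary, which is one way to read the higher accuracies reported for them.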
|
3 |
Intelligent Student Assessment And Coaching Interface To Web-based Education-oriented Intelligent Experimentation On Robot Supported Laboratory Set-ups
Motuk, Halil Erdem 01 December 2003 (has links) (PDF)
This thesis presents a framework for an intelligent interface for accessing robot-supported
remote laboratories through the Internet. The framework is composed of
the student assessment and coaching system, the experimentation scenario, and the
associated graphical user interface. The student assessment and coaching system is the
central feature of a successful intelligent interface for remote
experimentation with a robot-supported laboratory setup. The system has a modular
structure employing artificial neural networks and a fuzzy-rule-based decision
process to model the student's behaviour, to evaluate performance, and to coach
the student towards better achievement of the tasks to be performed during the
experimentation. With an experimentation scenario designed and a graphical user
interface, the system is applied to a robotic system connected to the Internet
in order to evaluate the proposed framework. Illustrative examples of the operation
of each module in the context of the application are given, and a
sensitivity analysis of the system's response to parameter changes is also performed. The
framework is then applied to a mobile robot control laboratory. The user interface
and the experimentation scenario are developed for this application, and the necessary
modifications are made to the student assessment and coaching system in order to
support the experiment.
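A fuzzy-rule-based decision step of the kind mentioned above can be pictured as mapping a crisp performance score to membership degrees in overlapping fuzzy sets and firing the rule with the strongest degree. The membership shapes, score ranges, and coaching actions below are hypothetical illustrations, not the thesis's actual rule base.

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def coach(score):
    """Map a 0-100 task score to a coaching action by evaluating the
    membership degree of each fuzzy rule and firing the strongest one."""
    memberships = {
        "give step-by-step hints": tri(score, -1, 0, 50),    # "performance is low"
        "suggest reviewing theory": tri(score, 25, 50, 75),  # "performance is medium"
        "offer a harder task": tri(score, 50, 100, 101),     # "performance is high"
    }
    return max(memberships, key=memberships.get)

print(coach(20))  # give step-by-step hints
```

In the thesis's architecture, the inputs to such rules would come from the neural-network student model rather than a single raw score.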
|