Research in multi-touch interaction has typically focused on direct spatial manipulation: techniques are designed to produce the most intuitive mapping between the movement of the hand and the resulting change in the virtual object. As we attempt to design for more complex operations, however, spatial manipulation weakens as a metaphor. This thesis introduces two contributions to multi-touch computing: a new interaction technique and a gesture recognition system.
I present Multi-Tap Sliders, a new interaction technique for operating in what I call non-spatial parametric spaces. Such spaces have no obvious literal spatial representation (e.g., exposure, brightness, contrast, and saturation in image editing). Multi-tap sliders encourage the user to keep her visual focus on the target instead of requiring her to look back at the interface. My design emphasizes ergonomics, clear visual design, and fluid transitions between modes of operation. Through a series of iterations, I develop a technique for quickly selecting and adjusting multiple numerical parameters. Evaluations show that multi-tap sliders improve on traditional sliders.
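The core idea can be illustrated with a minimal sketch. This is an illustrative assumption, not the thesis's implementation: here the tap count on a single widget selects which parameter is active, and a subsequent relative drag adjusts that parameter's value, so the user never has to visually reacquire a separate slider per parameter. The class and method names are hypothetical.

```python
class MultiTapSlider:
    """Toy model of a multi-tap slider: taps select, drags adjust."""

    def __init__(self, parameters):
        # parameters: ordered list of (name, initial_value) pairs,
        # values normalized to the range [0, 1].
        self.names = [name for name, _ in parameters]
        self.values = {name: value for name, value in parameters}
        self.active = None

    def tap(self, count):
        """Select a parameter by tap count (1-indexed), clamped to range."""
        index = min(max(count, 1), len(self.names)) - 1
        self.active = self.names[index]
        return self.active

    def drag(self, delta):
        """Adjust the active parameter by a relative drag delta, clamped."""
        if self.active is None:
            return None
        value = self.values[self.active] + delta
        self.values[self.active] = min(max(value, 0.0), 1.0)
        return self.values[self.active]


slider = MultiTapSlider([("exposure", 0.5), ("brightness", 0.5),
                         ("contrast", 0.5), ("saturation", 0.5)])
slider.tap(3)     # three taps select the third parameter, "contrast"
slider.drag(0.2)  # a rightward drag raises it relative to its current value
```

Because selection and adjustment happen at the finger's current location, the user's eyes can stay on the image being edited, which is the behavior the technique is designed to encourage.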
To facilitate further research on multi-touch gestural interaction, I developed mGestr, a training and recognition system that uses hidden Markov models to build a multi-touch gesture set. My evaluation shows recognition rates of up to 95%. The recognition framework is packaged as a service for easy integration with existing applications.
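A minimal sketch of HMM-based gesture classification in this spirit: each gesture class gets its own discrete HMM, a candidate touch trace is quantized into direction symbols, and the trace is assigned to the class whose model yields the highest forward likelihood. The tiny hand-set models below are illustrative assumptions, not the trained models from the thesis.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm for a discrete HMM."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for symbol in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][symbol]
                 for j in range(n)]
    return sum(alpha)


def classify(obs, models):
    """Return the gesture label whose HMM best explains the observation."""
    return max(models, key=lambda label: forward_likelihood(obs, *models[label]))


# Observation symbols: 0 = rightward movement, 1 = downward movement.
# Each model is (initial distribution pi, transitions A, emissions B);
# "swipe_right" mostly emits 0s, "swipe_down" mostly emits 1s.
models = {
    "swipe_right": ([1.0, 0.0],
                    [[0.9, 0.1], [0.0, 1.0]],
                    [[0.9, 0.1], [0.5, 0.5]]),
    "swipe_down":  ([1.0, 0.0],
                    [[0.9, 0.1], [0.0, 1.0]],
                    [[0.1, 0.9], [0.5, 0.5]]),
}

print(classify([0, 0, 0, 1], models))  # a mostly-rightward trace
```

In a full system the per-gesture models would be trained from example traces (e.g., via Baum-Welch) rather than written by hand, and the symbol alphabet would cover more movement directions and touch counts.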
Identifier | oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/151366 |
Date | 16 December 2013 |
Creators | Damaraju Sriranga, Sashikanth Raju |
Contributors | Hammond, Tracy A., Caverlee, James, Seo, Jinsil Hwaryoung, He, Weiling |
Source Sets | Texas A&M University |
Language | English |
Detected Language | English |
Type | Thesis, text |
Format | application/pdf |