Sketches are commonly used in the early stages of design. Our previous system allows users to sketch mechanical systems that the computer interprets. However, some parts of a mechanical system may be difficult or too complicated to express in a sketch alone. Adding speech recognition to create a multimodal system would move us toward our goal of a more natural user interface. This thesis examines the relationship between verbal and sketched input, particularly how to segment and align the two input streams. Toward this end, subjects were recorded while they sketched and talked. These recordings were transcribed, and a set of rules for performing segmentation and alignment was created. These rules represent the knowledge the computer needs to perform segmentation and alignment. The rules successfully interpreted all 24 data sets they were given.
Identifier | oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/7103 |
Date | 01 February 2003 |
Creators | Adler, Aaron D. |
Source Sets | M.I.T. Theses and Dissertations |
Language | en_US |
Detected Language | English |
Format | 193 p., 34430522 bytes, 46149955 bytes, application/postscript, application/pdf |
Relation | AITR-2003-004 |