
Segmentation and Alignment of Speech and Sketching in a Design Environment

Sketches are commonly used in the early stages of design. Our previous system lets users sketch mechanical systems that the computer then interprets. However, some parts of a mechanical system may be too difficult or too complicated to express in a sketch alone. Adding speech recognition to create a multimodal system moves us toward our goal of a more natural user interface. This thesis examines the relationship between verbal and sketched input, in particular how to segment and align the two input streams. Toward this end, subjects were recorded while they sketched and talked. These recordings were transcribed, and a set of rules for performing segmentation and alignment was created. These rules represent the knowledge the computer needs in order to segment and align the two modalities. The rules successfully interpreted all 24 data sets they were given.
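The abstract's core idea of aligning transcribed speech with sketch strokes can be illustrated with a minimal sketch. This is not the thesis's actual rule set; the data layout and the temporal-overlap heuristic below are assumptions for demonstration only.

```python
# Hypothetical illustration (not the thesis's actual rules): pair each
# transcribed speech segment with the sketch stroke it overlaps most in time.

def overlap(a, b):
    """Length of the overlap between two (start, end) time intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align(speech_segments, strokes):
    """Align speech with strokes by maximum temporal overlap.

    speech_segments: list of (start, end, words)
    strokes:         list of (start, end, shape_label)
    Returns a list of (words, shape_label or None) pairs.
    """
    pairs = []
    for s_start, s_end, words in speech_segments:
        best_label, best_ov = None, 0.0
        for t_start, t_end, label in strokes:
            ov = overlap((s_start, s_end), (t_start, t_end))
            if ov > best_ov:
                best_label, best_ov = label, ov
        pairs.append((words, best_label))
    return pairs

# Toy example with made-up timestamps and labels.
speech = [(0.0, 1.2, "this is a pulley"), (1.5, 3.0, "and a spring here")]
strokes = [(0.1, 1.0, "circle"), (1.6, 2.8, "zigzag")]
print(align(speech, strokes))
# → [('this is a pulley', 'circle'), ('and a spring here', 'zigzag')]
```

A real system would, as the abstract notes, need rules that also handle segmentation (deciding where one unit ends and the next begins), not just alignment of pre-segmented units.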

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/7103
Date: 01 February 2003
Creators: Adler, Aaron D.
Source Sets: M.I.T. Theses and Dissertations
Language: en_US
Detected Language: English
Format: 193 p., 34430522 bytes, 46149955 bytes, application/postscript, application/pdf
Relation: AITR-2003-004
