
Enriching models of natural language with auxiliary data

Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February 2020 / Manuscript. / Includes bibliographical references (pages 81-89). / The highest-performing natural language processing models generally solve language tasks by deriving statistical regularities from sequences of arbitrary tokens supplied as training data. Humans have a much richer notion of language, however. For one thing, they understand that language refers to objects and actions in the real world, which enables them to use language to efficiently transmit instructions on how to accomplish goals. For another, they learn to focus their attention on only those spans of text important for accomplishing the task at hand. In this thesis, we attempt to improve machine models of language by taking inspiration from these aspects of human language. The first half of this thesis concerns understanding instructional "how-to" language, such as "Add remaining flour. Then mix." The meaning is ambiguous without context: Add how much flour to what? Mix what, using what tools, until when? We show how to successfully parse this language by maintaining a distribution over the state of a theoretical kitchen as the instructions are parsed. We also show how to aid interpretation when videos of the task are available, by training a joint vision-language model on over 300,000 YouTube videos on how to cook. The second half discusses taking advantage of people's ability to focus on important parts of a passage in a multiple-choice reading comprehension task to enhance the performance of an automatic question-answering system. We record the gaze location of hundreds of subjects as they read and answer questions about newspaper articles. We then train a state-of-the-art transformer model to predict human attention as well as correct answers and find this leads to a substantial boost in performance over merely training the model to predict correct answers.
/ by Jonathan Matthew Malmaud. / Ph. D. Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences

Identifier oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/138515
Date January 2020
Creators Malmaud, Jonathan Matthew
Contributors Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Publisher Massachusetts Institute of Technology
Source Sets M.I.T. Theses and Dissertations
Language English
Detected Language English
Type Thesis
Format 89 pages, application/pdf
Rights MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided., http://dspace.mit.edu/handle/1721.1/7582
