
Leveraging Large Language Models Trained on Code for Symbol Binding

While large language models like GPT-3 have achieved impressive results in the zero-, one-, and few-shot settings, they still significantly underperform on some tasks relative to the state of the art (SOTA). For many tasks it would be useful to have answer options explicitly listed out in a multiple-choice format, decreasing computational cost and allowing the model to reason about the relative merits of possible answers. We argue that the reason this hasn't helped models like GPT-3 close the gap with the SOTA is that these models struggle with symbol binding: associating each answer option with a symbol that represents it. To ameliorate this situation we introduce index prompting, a way of leveraging language models trained on code to successfully answer multiple-choice formatted questions. When used with the OpenAI Codex model, our method improves accuracy by about 18% on average in the few-shot setting relative to GPT-3 across 8 datasets representing 4 common NLP tasks. It also achieves a new single-model state of the art on ANLI R3, ARC (Easy), and StoryCloze, suggesting that GPT-3's latent "understanding" has been previously underestimated.
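The abstract does not spell out the prompt template, but the core idea of symbol binding is to attach each answer option to a short symbol so the model only has to emit that symbol rather than reproduce an answer span. A minimal sketch in Python, assuming a generic text-completion model (the function names `format_prompt`, `pick_answer`, and the example question are illustrative, not from the thesis):

```python
# Illustrative sketch of multiple-choice prompting with symbol binding.
# The actual prompt format used for index prompting in the thesis is not
# given in this abstract; this only shows the general structure.

from string import ascii_uppercase


def format_prompt(question: str, options: list[str]) -> str:
    """Render a question with each answer option bound to a letter symbol."""
    lines = [f"Question: {question}"]
    for symbol, option in zip(ascii_uppercase, options):
        lines.append(f"{symbol}. {option}")
    lines.append("Answer:")
    return "\n".join(lines)


def pick_answer(completion: str, options: list[str]) -> str:
    """Map the model's emitted symbol back to the option it represents."""
    symbol = completion.strip()[:1].upper()
    return options[ascii_uppercase.index(symbol)]


# Example usage: the model sees all options at once and needs to generate
# (or be scored on) only a single token, e.g. "B".
prompt = format_prompt(
    "Which gas makes up most of Earth's atmosphere?",
    ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"],
)
print(prompt)
print(pick_answer("B", ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"]))
```

Compared with cloze-style evaluation, which scores every candidate answer with a separate forward pass, this formulation needs only one pass per question, which is the computational saving the abstract alludes to.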

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-11127
Date: 09 August 2022
Creators: Robinson, Joshua
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: https://lib.byu.edu/about/copyright/
