
Response Generation Using Large-scale Pre-trained Language Models

In this project I studied how generative neural language models can be used for response generation. The purpose of the model is to generate responses for a social robot, instead of having responses authored and evaluated by crowd-sourced workers. To achieve this, I trained a large-scale pre-trained neural language model on the collected data. I trained six model variations, differing in the amount of pre-training they received, to study the resulting changes in utterance quality, and I tested three different decoding methods for the same purpose. One of the model variations used multi-task learning during training, performing other tasks alongside response generation. The utterances produced by the models were evaluated through crowd-sourced human evaluation, which showed them to be of roughly the same quality as the original utterances the models were trained to replicate. The results indicate that a large-scale language model may be a viable alternative to crowd-sourced authoring and evaluation of utterances, reducing costs and providing more reliable results.
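The abstract mentions testing three decoding methods without naming them. The sketch below illustrates three commonly compared decoding strategies (greedy decoding, beam search, and nucleus sampling) applied to a pre-trained causal language model via the Hugging Face transformers library; the checkpoint name, prompt, and generation parameters are illustrative assumptions, not the thesis's actual setup.

    # A minimal sketch, assuming a GPT-2 checkpoint and the Hugging Face
    # transformers library; the thesis's model, data, and settings may differ.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "How was your weekend?"  # hypothetical dialogue context
    inputs = tokenizer(prompt, return_tensors="pt")
    common = dict(max_new_tokens=30, pad_token_id=tokenizer.eos_token_id)

    with torch.no_grad():
        # Greedy decoding: always take the single most probable next token.
        greedy = model.generate(**inputs, do_sample=False, **common)
        # Beam search: track the num_beams most probable partial sequences.
        beam = model.generate(**inputs, do_sample=False, num_beams=5, **common)
        # Nucleus (top-p) sampling: sample from the smallest token set whose
        # cumulative probability exceeds top_p (top_k=0 disables the top-k cutoff).
        nucleus = model.generate(**inputs, do_sample=True, top_p=0.9, top_k=0, **common)

    for name, output in [("greedy", greedy), ("beam", beam), ("nucleus", nucleus)]:
        print(f"{name}: {tokenizer.decode(output[0], skip_special_tokens=True)}")

Greedy and beam search are deterministic and tend toward safe, repetitive responses, while sampling-based methods trade some coherence for variety; this trade-off is the usual reason such methods are compared in response-generation work.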

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-415323
Date: January 2020
Creators: Nyberg, Jakob
Source Sets: DiVA Archive at Uppsala University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
Relation: UPTEC IT, 1401-5749 ; 20027
