
Putting a spin on SPINN: Representations of syntactic structure in neural network sentence encoders for natural language inference

This thesis presents and investigates a dependency-based recursive neural network model applied to the task of natural language inference. The dependency-based model is a direct extension of a previous constituency-based model used for natural language inference. The dependency-based model is evaluated on the Stanford Natural Language Inference corpus and compared to the previously proposed constituency-based model as well as a recurrent Long Short-Term Memory (LSTM) network. The experiments show that the LSTM outperforms both the dependency-based models and the constituency-based model. It is also shown that what should be explicitly represented depends on the model dimensionality used. With 50-dimensional models, more explicit representations of the dependency structure provide higher accuracies, and the best dependency-based model performs on par with the LSTM. Higher model dimensionalities seem to favor less explicit representations of the dependency structure. We hypothesize that a smaller dimensionality requires a more explicit representation of the relevant linguistic features of the input, while the explicit representation becomes limiting when a higher model dimensionality is used.
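The abstract describes the models only at a high level, so the following is a minimal, hypothetical sketch of the core idea behind a dependency-based recursive sentence encoder: dependent words are folded into their head word's vector one at a time until a single fixed-size sentence vector remains, which an NLI classifier would then consume. This is not the thesis code; the toy sentence, dimensions, parameter shapes, and function names are all illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation) of dependency-based
# recursive composition for a sentence encoder.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # the smallest model size discussed in the abstract

# Toy embeddings for "dogs chase cats"; in practice these would be pretrained.
emb = {w: rng.normal(scale=0.1, size=DIM) for w in ["dogs", "chase", "cats"]}

# Assumed composition parameters: concatenated head+dependent -> new head state.
W_dep = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
b_dep = np.zeros(DIM)

def compose_dependency(head_vec, dep_vec):
    """Fold one dependent into its head, as a dependency-based recursive net might."""
    return np.tanh(W_dep @ np.concatenate([head_vec, dep_vec]) + b_dep)

# Dependency tree of the toy sentence: "chase" heads both "dogs" and "cats".
root = emb["chase"]
for dependent in ("dogs", "cats"):
    root = compose_dependency(root, emb[dependent])

sentence_vec = root  # fixed-size sentence encoding an NLI classifier would consume
print(sentence_vec.shape)  # (50,)
```

A constituency-based encoder would instead compose the two children of each phrase-structure node, and an LSTM would read the words left to right; the comparison in the thesis is between these three ways of producing the sentence vector.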

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-139229
Date: January 2017
Creators: Jesper Segeblad
Publisher: Linköpings universitet, Interaktiva och kognitiva system
Source Sets: DiVA Archive at Uppsala University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
