On the Softmax Bottleneck of Word-Level Recurrent Language Models
Parthiban, Dwarak Govind (06 November 2020)
To predict the next word for a given input context (a sequence of previous words), a neural word-level language model outputs a probability distribution over all the words in the vocabulary using a softmax function. When the log-probability outputs for all such contexts are stacked together, the result is a log-probability matrix, denoted Q_theta, where theta denotes the model parameters. When language modeling is formulated as a matrix factorization problem, the matrix to be factorized, Q_theta, is expected to be high-rank, since natural language is highly context-dependent. However, existing softmax-based word-level language models cannot produce such matrices; this limitation is known as the softmax bottleneck.
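To make the bottleneck concrete, the following sketch (not taken from the thesis; the sizes N, V, and d are hypothetical) builds Q_theta from random context vectors and output embeddings and checks that its rank is bounded by d + 1, which is far below min(N, V) when the vocabulary is large.

    import numpy as np

    # Hypothetical sizes: N contexts, vocabulary of V words, hidden size d.
    N, V, d = 1000, 5000, 32
    H = np.random.randn(N, d)          # context (hidden-state) vectors
    W = np.random.randn(V, d)          # output word embeddings

    logits = H @ W.T                   # (N, V) pre-softmax scores
    # Numerically stable log-softmax: subtract the per-row log-partition function.
    m = logits.max(axis=1, keepdims=True)
    log_Z = m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    Q = logits - log_Z                 # Q_theta: one row of log probabilities per context

    # logits have rank <= d, and subtracting the per-row normalizer adds at most
    # one rank-1 component, so rank(Q) <= d + 1 regardless of N and V.
    print(np.linalg.matrix_rank(Q))    # typically d + 1 = 33, far below min(N, V)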
Several works have attempted to overcome this limitation, for example by proposing models that can produce a high-rank Q_theta. While reproducing the results of these works, we observed that the rank of Q_theta does not always correlate positively with better performance (i.e., lower test perplexity). This puzzling observation prompted a systematic investigation into how the rank of Q_theta influences the performance of a language model. We first introduce a new family of activation functions called the Generalized SigSoftmax (GSS). By controlling the parameters of GSS, we construct language models that produce Q_theta with diverse ranks (low, medium, and high). Across models that use GSS with different parameters, we observe that rank does not have a strong positive correlation with perplexity on the test data, reinforcing our initial observation. By inspecting the top-5 predictions made by different models for a selected set of input contexts, we find that a high-rank Q_theta does not guarantee strong qualitative performance. We then conduct experiments to check whether models that produce a high-rank Q_theta offer any additional benefits, and show that Q_theta instead suffers from fast singular value decay. Additionally, we propose an alternative metric for the rank of a matrix, the epsilon-effective rank, which can approximately quantify the singular value distribution as different values of epsilon are used.
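As a rough illustration of how such a metric can summarize singular value decay, the sketch below uses one plausible reading of an epsilon-effective rank, counting singular values that are at least an epsilon fraction of the largest one; the thesis's exact definition may differ, and the matrix here is synthetic.

    import numpy as np

    def epsilon_effective_rank(Q, eps):
        """Count singular values of Q no smaller than eps times the largest one."""
        s = np.linalg.svd(Q, compute_uv=False)
        return int(np.sum(s >= eps * s[0]))

    # Synthetic matrix with an exponentially decaying spectrum: it looks full-rank
    # only for tiny epsilon and collapses to a small effective rank as epsilon grows.
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
    V, _ = np.linalg.qr(rng.standard_normal((200, 200)))
    s = np.exp(-0.1 * np.arange(200))
    Q = U @ np.diag(s) @ V.T

    for eps in (1e-12, 1e-3, 1e-1):
        print(eps, epsilon_effective_rank(Q, eps))   # approximately 200, 70, 24 here

Sweeping epsilon in this way traces the whole spectrum, rather than reducing it to a single numerical-rank threshold.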
We conclude by showing that it is regularization that has played the positive role in the performance of these high-rank models relative to the chosen baselines, and that no model yet truly gains improved expressiveness merely by breaking the softmax bottleneck.