1. Regularized Fine-tuning Strategies for Neural Language Models: Application of entropy regularization on GPT-2

Hong, Jae Eun (January 2022)
Deep neural language models such as GPT-2 are undoubtedly strong at text generation, but they often require special decoding strategies to prevent degenerate output, most notably repetition. Training with the maximum-likelihood objective produces peaked output distributions, which leads to over-confident neural networks. In this thesis, we explore entropy regularization as a way to smooth a neural language model's peaked output distribution during fine-tuning, using GPT-2. We define three model variants: (1) the out-of-the-box model without fine-tuning, (2) a fine-tuned model without entropy regularization, and (3) a fine-tuned model with entropy regularization. To investigate the effect of domain, we also split the data in three ways: (1) fine-tuned on a heterogeneous dataset and tested on a heterogeneous dataset, (2) fine-tuned on a homogeneous dataset and tested on a homogeneous dataset, and (3) fine-tuned on a heterogeneous dataset and tested on a homogeneous dataset. For entropy regularization, we experiment with the entropy strength parameter (𝛽) over the values {0.5, 1.0, 2.0, 4.0, 6.0} and with annealing the parameter during fine-tuning. Our findings show that entropy-based regularization during fine-tuning improves text generation models by significantly reducing the repetition rate without tuning the decoding strategy. Comparing the probabilities assigned to tokens of human-written sentences, we observe that entropy regularization compensates for a shortcoming of deterministic decoding (beam search), which mostly selects a few high-probability words. Various studies have explored entropy regularization when training neural networks from scratch, but few cover its effect at the fine-tuning stage of text generation with large-scale pre-trained language models. Our findings present strong evidence that substantial improvements in text generation can be achieved by applying entropy regularization, a highly cost-effective approach, during fine-tuning.
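
As a rough illustration of the objective described in this abstract, the sketch below adds a maximum-entropy bonus to the standard cross-entropy loss of a causal language model; the function name, the token-shifting convention, and the note on annealing are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch (assumed, not the thesis's code) of an entropy-regularized
# fine-tuning loss for a causal language model such as GPT-2. beta controls
# how strongly peaked output distributions are penalized; annealing it (e.g.
# decaying beta over fine-tuning steps) is one variant mentioned above.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits: torch.Tensor,
                             labels: torch.Tensor,
                             beta: float = 1.0,
                             ignore_index: int = -100) -> torch.Tensor:
    """Cross-entropy minus beta times the mean token-level entropy."""
    # Shift so that position t predicts token t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()

    ce = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=ignore_index,
    )

    log_probs = F.log_softmax(shift_logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()

    # Subtracting the entropy term rewards flatter, less over-confident
    # next-token distributions during fine-tuning.
    return ce - beta * entropy
```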
2. Probabilistic modelling of morphologically rich languages

Botha, Jan Abraham (January 2014)
This thesis investigates how the sub-structure of words can be accounted for in probabilistic models of language. Such models play an important role in natural language processing tasks such as translation or speech recognition, but often rely on the simplistic assumption that words are opaque symbols. This assumption does not fit morphologically complex languages well, where words can have rich internal structure and sub-word elements are shared across distinct word forms. Our approach is to encode basic notions of morphology into the assumptions of three different types of language models, with the intention that leveraging shared sub-word structure can improve model performance and help overcome data sparsity that arises from morphological processes. In the context of n-gram language modelling, we formulate a new Bayesian model that relies on the decomposition of compound words to attain better smoothing, and we develop a new distributed language model that learns vector representations of morphemes and leverages them to link together morphologically related words. In both cases, we show that accounting for word sub-structure improves the models' intrinsic performance and provides benefits when applied to other tasks, including machine translation. We then shift the focus beyond the modelling of word sequences and consider models that automatically learn what the sub-word elements of a given language are, given an unannotated list of words. We formulate a novel model that can learn discontiguous morphemes in addition to the more conventional contiguous morphemes that most previous models are limited to. This approach is demonstrated on Semitic languages, and we find that modelling discontiguous sub-word structures leads to improvements in the task of segmenting words into their contiguous morphemes.
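
As a loose illustration of the idea of linking morphologically related words through shared morpheme vectors, the sketch below composes a word representation additively from morpheme embeddings; the class name, the segmentation example, and the padding convention are assumptions for illustration, not the thesis's exact formulation.

```python
# Minimal sketch (assumed) of composing a word representation from morpheme
# vectors. Related word forms share morpheme vectors, so evidence about one
# form informs its relatives and mitigates morphological data sparsity.
import torch
import torch.nn as nn

class MorphemeCompositionalEmbedding(nn.Module):
    def __init__(self, num_morphemes: int, dim: int):
        super().__init__()
        # Index 0 is reserved for padding and contributes a zero vector.
        self.morpheme_vectors = nn.Embedding(num_morphemes, dim, padding_idx=0)

    def forward(self, morpheme_ids: torch.Tensor) -> torch.Tensor:
        # morpheme_ids: (batch, max_morphemes_per_word), zero-padded.
        vecs = self.morpheme_vectors(morpheme_ids)  # (batch, m, dim)
        return vecs.sum(dim=1)                      # additive composition

# Illustrative segmentation: "impressively" -> ["impress", "ive", "ly"];
# "impressive" -> ["impress", "ive"] then shares two of those vectors.
```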
