The research focuses on improving natural language processing (NLP) models by integrating the hierarchical structure of language, which is essential for understanding and generating human language. The main contributions of the study are:

1. Hierarchical RNN model: a deep recurrent neural network that captures both explicit and implicit hierarchical structures in language.
2. Hierarchical attention mechanism: a multi-level attention mechanism that helps the model prioritize relevant information at each level of the hierarchy (a sketch follows the abstract).
3. Latent indicators and efficient training: integration of latent indicators via the Expectation-Maximization algorithm, with computational cost reduced through bootstrap sampling and layered training strategies.
4. Sequence-to-sequence model for translation: extension of the model to translation tasks, including a novel pre-training technique and a hierarchical decoding strategy that stabilizes latent indicators during generation.

The study reports performance on various NLP tasks comparable to that of larger models, with the added benefit of increased interpretability.
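As a rough illustration of the multi-level attention named in contribution 2, here is a minimal PyTorch sketch of a two-level (word -> sentence) attention encoder. The class names, layer sizes, and GRU encoders are illustrative assumptions for a generic hierarchical-attention design, not the thesis's actual architecture.

    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        """Soft attention that pools a sequence of hidden states into one vector."""
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(dim, dim)
            self.context = nn.Linear(dim, 1, bias=False)

        def forward(self, h):                                # h: (batch, seq, dim)
            scores = self.context(torch.tanh(self.proj(h)))  # (batch, seq, 1)
            weights = torch.softmax(scores, dim=1)           # attention over the sequence
            return (weights * h).sum(dim=1)                  # (batch, dim)

    class HierarchicalRNN(nn.Module):
        """Word-level attention builds sentence vectors; sentence-level attention
        builds a document vector (hypothetical configuration)."""
        def __init__(self, vocab_size, embed_dim=100, hidden_dim=50):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.word_rnn = nn.GRU(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            self.word_attn = AttentionPool(2 * hidden_dim)
            self.sent_rnn = nn.GRU(2 * hidden_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            self.sent_attn = AttentionPool(2 * hidden_dim)

        def forward(self, docs):                  # docs: (batch, n_sents, n_words)
            b, s, w = docs.shape
            words = self.embed(docs.view(b * s, w))       # embed each sentence
            h, _ = self.word_rnn(words)
            sent_vecs = self.word_attn(h).view(b, s, -1)  # one vector per sentence
            h, _ = self.sent_rnn(sent_vecs)
            return self.sent_attn(h)                      # document vector

    # Example: a batch of 2 documents, 3 sentences each, 6 word ids per sentence.
    model = HierarchicalRNN(vocab_size=1000)
    doc_vec = model(torch.randint(0, 1000, (2, 3, 6)))    # -> shape (2, 100)

Pooling with learned attention at each level, rather than taking the last hidden state, is what lets such a model expose which words and sentences drove a prediction, which is one plausible route to the interpretability the abstract claims.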
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/24492517 |
Date | 03 November 2023 |
Creators | Zhaoxin Luo (17328937) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/Interpretable_natural_language_processing_models_with_deep_hierarchical_structures_and_effective_statistical_training/24492517 |