Parameter Efficiency in Fine-Tuning Pretrained Large Language Models for Downstream Tasks
Dorairaj, Jonathan, January 2024
This thesis investigates Parameter-Efficient Fine-Tuning (PEFT) methods, specifically Low-Rank Adaptation (LoRA) (Hu et al. 2021) and Adapters (Houlsby et al. 2019), using the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. 2019). The primary focus is to evaluate the effectiveness and efficiency of these methods for fine-tuning pre-trained language models. Additionally, we introduce a novel application by extending the methodology of Yang et al. 2024 to the adapter module weights. We apply Laplace approximations over both the LoRA weights (Yang et al. 2024; Daxberger et al. 2022a) and the newly adapted Adapter weights, assessing the Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL). Furthermore, we discuss practical considerations such as training time, memory usage, and storage requirements of these PEFT techniques. The findings provide insight into the trade-offs and benefits of using LoRA and Adapters for fine-tuning in resource-constrained environments.
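The low-rank reparameterization behind LoRA can be summarized in a minimal sketch. This is an illustration only, not the thesis code: the dimensions, rank, and variable names below are assumed for demonstration. LoRA freezes the pretrained weight W0 and learns only a low-rank update B @ A, so the adapted layer computes (W0 + B @ A) @ x with far fewer trainable parameters than full fine-tuning.

```python
import numpy as np

# Hypothetical hidden size and LoRA rank (assumptions for illustration).
d_in, d_out, r = 768, 768, 8
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

x = rng.standard_normal(d_in)
y = (W0 + B @ A) @ x                       # adapted forward pass

# With B initialized to zero, the adapted layer equals the base layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(y, W0 @ x)

# Trainable parameter count: r*(d_in + d_out) for LoRA vs d_in*d_out for
# full fine-tuning of this one matrix.
trainable = r * (d_in + d_out)
full = d_in * d_out
print(trainable, full)  # prints 12288 589824
```

At rank 8 the update trains roughly 2% of the parameters of the full matrix, which is the parameter-efficiency trade-off the thesis evaluates.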