
Parameter Efficiency in Fine-Tuning Pretrained Large Language Models for Downstream Tasks

This thesis investigates parameter-efficient fine-tuning (PEFT) methods, specifically Low-Rank Adaptation (LoRA) (Hu et al. 2021) and Adapters (Houlsby et al. 2019), on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. 2019). The primary focus is to evaluate the effectiveness and efficiency of these methods for fine-tuning pre-trained language models. In addition, we introduce a novel application of the methodology of Yang et al. 2024 to the adapter module weights: we apply Laplace approximations over both the LoRA weights (Yang et al. 2024; Daxberger et al. 2022a) and the newly adapted Adapter weights, assessing calibration via the Expected Calibration Error (ECE) and the Negative Log-Likelihood (NLL). We also discuss practical considerations such as training time, memory usage, and storage requirements of these PEFT techniques. The findings provide insight into the trade-offs and benefits of using LoRA and Adapters for fine-tuning in resource-constrained environments.
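The Expected Calibration Error mentioned above measures how far a model's predicted confidences are from its empirical accuracy. The following is a minimal sketch of the standard binned ECE estimator (equal-width confidence bins, weighted by bin size); the function name and bin count are illustrative, not taken from the thesis:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the bin-size-weighted average gap between mean
    confidence and empirical accuracy over equal-width bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # right-inclusive bins so a confidence of exactly 1.0 is counted
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: predictions at 90% confidence that are right only half
# the time yield an ECE of about 0.4 (badly miscalibrated).
conf = [0.9] * 10
hits = [1] * 5 + [0] * 5
print(expected_calibration_error(conf, hits))
```

A well-calibrated model (e.g. 80% confidence with 80% accuracy) drives the per-bin gaps, and hence the ECE, toward zero; the thesis uses this metric, together with the NLL, to compare the Laplace-approximated LoRA and Adapter posteriors against their standard fine-tuned counterparts.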

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-205247
Date: January 2024
Creators: Dorairaj, Jonathan
Publisher: Linköpings universitet, Statistik och maskininlärning
Source Sets: DiVA Archive at Uppsala University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
