
Generalization and Fairness Optimization in Pretrained Language Models

This study introduces a method to address the generalization challenge in pretrained language models (PLMs): the degradation of their performance on linguistic data beyond the training distribution. Improving PLMs' adaptability to out-of-distribution (OOD) data is essential for their reliability and practical utility in real-world applications. We also address the ethical imperative of fairness in PLMs, which grows more pressing as they become integral to decision-making in sensitive societal sectors. We introduce gender-tuning, a method that identifies and disrupts gender-related biases in training data by perturbing gendered terms, replacing them so that their associations with co-occurring words are broken. Gender-tuning thus offers a practical, ethical intervention against gender bias in PLMs. Finally, we present FairAgent, a novel framework that equips small language models (SLMs) with fairness by drawing on the knowledge of large language models (LLMs) without incurring the latter's computational costs. FairAgent enables SLMs to consult LLMs, harnessing their broader knowledge to guide the generation of less biased content. The system detects bias in SLM responses, generates prompts that correct it, and accumulates effective prompts for future use. Over time, the SLM becomes increasingly adept at producing fair responses on its own, improving both computational efficiency and fairness in AI-driven interactions.
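
To make the gender-tuning idea concrete, the sketch below perturbs gendered tokens in training text by randomly swapping them with counterpart terms, so a model fine-tuned on the perturbed text cannot learn stable associations between gender words and their neighbors. This is a minimal sketch based only on the abstract's description: the term-pair lexicon, the swap probability, and the function name are illustrative assumptions, not the dissertation's actual implementation.

    import random

    # Hypothetical lexicon of gendered term pairs; the dissertation's actual
    # vocabulary and replacement policy are not specified in the abstract.
    GENDERED_PAIRS = {
        "he": "she", "she": "he",
        "him": "her", "her": "him",
        "his": "hers", "hers": "his",
        "man": "woman", "woman": "man",
        "father": "mother", "mother": "father",
    }

    def gender_tune(tokens, swap_prob=0.5):
        """Randomly replace gendered tokens with counterpart terms so that
        fine-tuning cannot rely on stable gender-word co-occurrences."""
        out = []
        for tok in tokens:
            swap = GENDERED_PAIRS.get(tok.lower())
            if swap is not None and random.random() < swap_prob:
                out.append(swap)
            else:
                out.append(tok)
        return out

    # Example: each run breaks a different subset of gender associations.
    print(gender_tune("she is a nurse and he is a doctor".split()))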
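
The FairAgent loop described above can be pictured as follows: the SLM answers a query, a detector checks the answer for bias, and on failure the LLM is consulted for a corrective prompt; prompts that succeed are cached and prepended to later queries so the SLM improves without repeated LLM calls. Every component name and interface below is a hypothetical stand-in inferred from the abstract, not the framework's actual code.

    from typing import Callable, List

    class FairAgentLoop:
        """Minimal sketch of the consult-and-cache loop; the callables
        are stand-ins for the framework's actual modules."""

        def __init__(self,
                     slm: Callable[[str], str],
                     llm_advise: Callable[[str, str], str],
                     is_biased: Callable[[str], bool],
                     max_rounds: int = 3):
            self.slm = slm                     # small model: prompt -> response
            self.llm_advise = llm_advise       # large model: (query, biased response) -> corrective prompt
            self.is_biased = is_biased         # detector flagging biased responses
            self.max_rounds = max_rounds
            self.prompt_store: List[str] = []  # effective prompts accumulated for reuse

        def respond(self, query: str) -> str:
            # First attempt: prepend debiasing prompts that worked before,
            # so the SLM improves over time without consulting the LLM.
            prefix = " ".join(self.prompt_store)
            response = self.slm(f"{prefix} {query}".strip())
            for _ in range(self.max_rounds):
                if not self.is_biased(response):
                    return response
                # Consult the LLM for a corrective prompt and regenerate.
                hint = self.llm_advise(query, response)
                response = self.slm(f"{hint} {query}")
                if not self.is_biased(response):
                    self.prompt_store.append(hint)  # cache the prompt that worked
            return response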

Identifier: oai:union.ndltd.org:unt.edu/info:ark/67531/metadc2332571
Date: 05 1900
Creators: Ghanbar Zadeh, Somayeh
Contributors: Huang, Yan; Buckles, Bill P., 1942-; Blanco, Eduardo; Yuan, Jing
Publisher: University of North Texas
Source Sets: University of North Texas
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: Text
Rights: Public. Copyright is held by the author, Ghanbar Zadeh, Somayeh, unless otherwise noted. All rights reserved.
