<p dir="ltr">Deep learning models have demonstrated remarkable success across diverse domains, including computer vision and natural language processing. These models heavily rely on resources, encompassing annotated data, computational power, and storage. However, mobile devices, particularly in scenarios like medical or multilingual contexts, often face constraints with computing power, making ample data annotation prohibitively expensive. Developing deep learning models for such resource-constrained scenarios presents a formidable challenge. Our primary goal is to enhance the efficiency of state-of-the-art neural network models tailored for resource-limited scenarios. Our commitment lies in crafting algorithms that not only mitigate annotation requirements but also reduce computational complexity and alleviate storage demands. Our dissertation focuses on two key areas: Parameter-efficient Learning and Data-efficient Learning. In Part 1, we present our studies on parameter-efficient learning. This approach targets the creation of lightweight models for efficient storage or inference. The proposed solutions are tailored for diverse tasks, including text generation, text classification, and text/image retrieval. In Part 2, we showcase our proposed methods for data-efficient learning, concentrating on cross-lingual and multi-lingual text classification applications. </p>
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/26349415
Date | 22 July 2024
Creators | Haoyu Wang (19193416)
Source Sets | Purdue University
Detected Language | English
Type | Text, Thesis
Rights | CC BY 4.0
Relation | https://figshare.com/articles/thesis/Minimalism_Yields_Maximum_Results_Deep_Learning_with_Limited_Resource/26349415