1. Giant Pigeon and Small Person: Prompting Visually Grounded Models about the Size of Objects

Yi Zhang, 22 April 2022
Empowering machines to understand our physical world should go beyond models with only natural language and models with only vision. Vision-and-language is a growing field that attempts to bridge the gap between the natural language processing and computer vision communities by enabling models to learn visually grounded language. However, as an increasing number of pre-trained visual-linguistic models focus on the alignment between visual regions and natural language, it is hard to claim that these models capture certain properties of objects, such as size, in their latent space. Inspired by recent trends in prompt learning, this study designs a prompt-learning framework for two visual-linguistic models, ViLBERT and ViLT, and uses different manually crafted prompt templates to evaluate how consistently these models compare the sizes of objects. The results show that ViLT is the more consistent of the two in prediction accuracy on the given task, evaluated with six pairs of objects under four prompt designs. However, overall prediction accuracy on this object-size comparison task is lower than expected: even the better model, ViLT, beats the proposed random-chance baseline in only 16 of 24 cases. As this is a preliminary study of the potential of pre-trained visual-linguistic models for object-size comparison, many directions remain for future work, such as investigating more models, choosing more object pairs, and trying different approaches to feature engineering and prompt engineering.
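As a minimal sketch of the template-probing setup the abstract describes (the template wordings, object pairs, and `score_prompt` scorer below are hypothetical stand-ins, not the thesis's actual four templates or six pairs):

```python
import random

# Hypothetical prompt templates for probing object-size knowledge.
TEMPLATES = [
    "A {a} is larger than a {b}.",
    "A {a} is bigger than a {b}.",
    "In the real world, a {a} takes up more space than a {b}.",
    "Compared with a {b}, a {a} is huge.",
]

# Hypothetical object pairs, each listed as (larger, smaller).
OBJECT_PAIRS = [("elephant", "cat"), ("bus", "bicycle"), ("person", "pigeon")]

def score_prompt(prompt: str) -> float:
    """Stand-in for a visual-linguistic model's plausibility score
    (e.g. from ViLT or ViLBERT). A real experiment would replace this
    stub with a model call; random scores keep the sketch runnable and
    behave like the random-chance baseline (about 50% correct)."""
    return random.random()

def evaluate(templates, pairs):
    """For each template, count how often the true statement (larger
    object first) outscores the swapped, false statement."""
    for template in templates:
        correct = sum(
            score_prompt(template.format(a=big, b=small))
            > score_prompt(template.format(a=small, b=big))
            for big, small in pairs
        )
        print(f"{template!r}: {correct}/{len(pairs)} correct")

if __name__ == "__main__":
    random.seed(0)
    evaluate(TEMPLATES, OBJECT_PAIRS)
```

Comparing per-template accuracies in this way is what makes consistency across prompt designs measurable, which is the quantity the study tracks.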
2. Prompt-learning and Zero-shot Text Classification with Domain-specific Textual Data

Luo, Hengyu, January 2023
The rapid growth of textual data in the digital age presents unique challenges for domain-specific text classification, particularly the scarcity of labeled data in many applications due to the high cost of manual labeling. In this thesis, we explore the applicability of prompt learning, well known for suiting few-shot scenarios and requiring far less data, as an emerging alternative to traditional fine-tuning, for domain-specific text classification in the context of customer-agent interactions in the retail sector. Specifically, we implement the entire prompt-learning pipeline for the classification task; our investigation covers several prompt-learning strategies, including fixed-prompt language-model tuning and tuning-free prompting, along with an examination of language-model selection, few-shot sampling strategy, prompt-template design, and verbalizer design. In this manner, we assess the overall performance of prompt learning on the classification task. Through a systematic evaluation, we demonstrate that with the fixed-prompt language-model tuning strategy, based on relatively small language models (e.g. T5-base, around 220M parameters), prompt learning can achieve competitive performance (close to 75% accuracy) even with limited labeled data (as little as 15% of the full data). With the tuning-free prompting strategy, based on a regular-size language model (e.g. FLAN-T5-large, around 770M parameters), accuracy can reach roughly 30% with detailed prompt templates in a zero-shot setting (no extra training data involved). These results offer valuable insights for researchers and practitioners working with domain-specific textual data, prompt learning, and few-shot / zero-shot learning. The findings highlight the potential of prompt learning as a practical solution for classification problems across diverse domains and set the stage for future research in this area.
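As an illustrative sketch of the tuning-free, zero-shot prompting strategy with a verbalizer (the label set, template wording, and example utterance are hypothetical; only the FLAN-T5 model family matches the abstract, and the smaller "base" checkpoint is used to keep the example light):

```python
from transformers import pipeline

# Zero-shot, tuning-free prompting with an instruction-tuned model.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

# Hypothetical intent labels for retail customer-agent interactions.
LABELS = ["refund request", "order status", "product question", "complaint"]

def verbalize(answer: str) -> str:
    """Verbalizer: map the model's free-text answer back onto a label."""
    answer = answer.lower()
    for label in LABELS:
        if label in answer:
            return label
    return "unknown"

def classify(utterance: str) -> str:
    # A detailed prompt template, filled with the utterance and labels;
    # no training data is involved, matching the zero-shot setting.
    prompt = (
        "Classify the customer message into one of these categories: "
        + ", ".join(LABELS) + ".\n"
        f"Message: {utterance}\n"
        "Category:"
    )
    output = generator(prompt, max_new_tokens=10)[0]["generated_text"]
    return verbalize(output)

print(classify("I still haven't received my package from last week."))
```

The fixed-prompt tuning strategy differs only in that the template is held fixed while the language model's weights are updated on the few labeled examples.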
3. Few-shot prompt learning for automating model completion

Ben-Chaaben, Meriem, 08 1900
Modelers often encounter challenges or difficulties when designing a particular software model. Throughout this thesis, we explore various paths and examine different approaches to address this issue. We finally propose a simple yet novel approach that enhances completion in domain-modeling activities. This approach leverages the power of large language models through few-shot prompt learning, eliminating the need for extensive training or fine-tuning on the scarce datasets available in this field. One notable strength of our approach lies in its versatility: it can be seamlessly integrated into various modeling activities, providing valuable support and recommendations to software modelers. Additionally, we conducted a user study to evaluate the usefulness of this approach and the value of modeling assistance; by gathering feedback from software modelers, we aimed to determine whether the effort invested in modeling assistance is worthwhile.
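As a hedged sketch of few-shot prompt construction for domain-model completion (the example model fragments, prompt format, and `complete` stub are invented for illustration; the thesis's actual prompts and LLM backend are not shown here):

```python
# Few-shot prompt assembly: a handful of worked examples
# (partial model -> suggested additions) precede the modeler's
# partial model, and a large language model continues the pattern.
FEW_SHOT_EXAMPLES = [
    (
        "class Library; class Book; Library has many Book",
        "class Member; Library has many Member; Member borrows Book",
    ),
    (
        "class Store; class Product; Store sells Product",
        "class Customer; Customer buys Product; class Order; Order contains Product",
    ),
]

def build_prompt(partial_model: str) -> str:
    """Assemble a few-shot prompt from the example completions."""
    parts = ["Complete the following domain models.\n"]
    for partial, completion in FEW_SHOT_EXAMPLES:
        parts.append(f"Partial model: {partial}\nSuggested additions: {completion}\n")
    parts.append(f"Partial model: {partial_model}\nSuggested additions:")
    return "\n".join(parts)

def complete(prompt: str) -> str:
    """Stand-in for a large-language-model call; replace with a real
    LLM client. A placeholder string keeps the sketch runnable."""
    return "(LLM suggestion would appear here)"

if __name__ == "__main__":
    prompt = build_prompt("class School; class Student; School enrolls Student")
    print(prompt)
    print(complete(prompt))
```

Because only the prompt carries the task examples, no model weights are updated, which is what lets the approach sidestep fine-tuning on scarce modeling datasets.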
