1. Domain-Aware Continual Zero-Shot Learning. Yi, Kai, 29 November 2021.
We introduce Domain-Aware Continual Zero-Shot Learning (DACZSL), the task of sequentially recognizing images of unseen categories in unseen domains. We build DACZSL on top of the DomainNet dataset by dividing it into a sequence of tasks, where classes are provided incrementally on seen domains during training and evaluation is conducted on unseen domains for both seen and unseen classes. We also propose a novel Domain-Invariant CZSL Network (DIN), which outperforms state-of-the-art baseline models that we adapt to the DACZSL setting. We adopt a structure-based approach to alleviate forgetting of knowledge from previous tasks, using a small per-task private network in addition to a global shared network. To encourage the private networks to capture domain- and task-specific representations, we train our model with a novel adversarial knowledge-disentanglement objective that makes the global network task-invariant and domain-invariant across all tasks. Our method also learns a class-wise prompt to obtain better class-level text representations, which serve as side information enabling zero-shot prediction of future unseen classes. Our code and benchmarks are available at https://zero-shot-learning.github.io/daczsl.
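To make the described architecture concrete, the following is a minimal sketch of how a global shared network, small per-task private networks, and class-wise learnable prompts could fit together; module sizes, names, and the fusion scheme are illustrative assumptions, not the released DIN code.

```python
import torch
import torch.nn as nn

class DINSketch(nn.Module):
    """Hedged sketch of a DIN-style model: a global shared encoder,
    a small private encoder per task, and learnable per-class prompt
    embeddings used as side information for zero-shot prediction.
    All sizes and module choices are illustrative assumptions."""

    def __init__(self, feat_dim=512, embed_dim=256, num_tasks=6, classes_per_task=50):
        super().__init__()
        # Global network shared across all tasks (trained to be task/domain-invariant).
        self.global_net = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU(),
                                        nn.Linear(embed_dim, embed_dim))
        # One small private network per task to absorb task/domain-specific signal.
        self.private_nets = nn.ModuleList(
            [nn.Linear(feat_dim, embed_dim) for _ in range(num_tasks)])
        # Class-wise learnable prompts standing in for text-side class embeddings.
        self.class_prompts = nn.Parameter(
            torch.randn(num_tasks * classes_per_task, embed_dim))

    def forward(self, img_feats, task_id):
        # Fuse shared and task-private representations of the image features.
        z = self.global_net(img_feats) + self.private_nets[task_id](img_feats)
        z = nn.functional.normalize(z, dim=-1)
        prompts = nn.functional.normalize(self.class_prompts, dim=-1)
        # Zero-shot-style prediction: similarity to every class prompt seen so far.
        return z @ prompts.t()
```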
2. A Transfer Learning Methodology of Domain Generalization for Prognostics and Health Management. Yang, Qibo, January 2020.
No description available.
3. Leveraging Synthetic Images with Domain-Adversarial Neural Networks for Fine-Grained Car Model Classification. Smith, Dayyan, January 2021.
Supervised learning methods require vast amounts of annotated images to successfully train an image classifier, and acquiring the necessary annotations is costly. The increasing availability of photorealistic, automatically annotated computer-generated images raises the question of under which conditions this synthetic data can be leveraged during training. We investigate the conditions under which computer-generated renders of car models can be leveraged for fine-grained car model classification.
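The title refers to domain-adversarial neural networks; a common way to realize the synthetic-to-real domain confusion they rely on is a gradient-reversal layer, sketched below. This is a generic illustration of the DANN idea under assumed feature and class sizes, not the thesis implementation.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward
    pass -- the standard gradient-reversal trick used by DANN-style models."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class DANNSketch(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=196, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.classifier = nn.Linear(feat_dim, num_classes)   # fine-grained car-model head
        self.domain_head = nn.Linear(feat_dim, 2)            # synthetic vs. real discriminator

    def forward(self, feats):
        class_logits = self.classifier(feats)
        # Reversed gradients push the feature extractor toward domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(feats, self.lamb))
        return class_logits, domain_logits
```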
4. Label-Efficient Visual Understanding with Consistency Constraints. Zou, Yuliang, 24 May 2022.
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available at training time. However, progress on these visual tasks is limited by the number of manual annotations. Annotating visual data is usually time-consuming and error-prone, making it hard to scale up human labeling for many visual tasks. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet, and we can effortlessly acquire a large amount of annotated synthetic visual data from game engines. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric).
We organize the dissertation as follows. In Part I, we propose to use consistency over different geometric formulations and a cycle consistency over time to tackle low-level scene-geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with a single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. Annotating visual data is usually time-consuming and error-prone, making the human labeling process hard to scale. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram), and we can effortlessly acquire a large amount of annotated synthetic visual data from game engines. In this dissertation, we explore how raw visual data and synthetic data can be utilized for various visual tasks, aiming to replace or reduce direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions for the same visual input across different transformations (e.g., geometric, temporal, photometric).
We organize the dissertation as follows. In Part I, we propose using consistency over different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene-geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
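As one concrete illustration of the cross-view consistency constraint used in the semi-supervised setting of Part II, the sketch below assumes a FixMatch-style scheme in which confident pseudo-labels from a weakly augmented view supervise a strongly augmented view; the confidence threshold and loss form are assumptions, not the dissertation's exact objective.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, weak_views, strong_views, threshold=0.95):
    """Encourage consistent predictions across two augmented views of the
    same unlabeled images: pseudo-labels from the weak view supervise the
    strong view, keeping only confident predictions. The threshold is an
    illustrative assumption."""
    with torch.no_grad():
        probs = F.softmax(model(weak_views), dim=-1)
        conf, pseudo_labels = probs.max(dim=-1)
        mask = (conf >= threshold).float()        # ignore low-confidence samples
    strong_logits = model(strong_views)
    per_sample = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (mask * per_sample).mean()
```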
5. Handling Domain Shift in 3D Point Cloud Perception. Saltori, Cristiano, 10 April 2024.
This thesis addresses the problem of domain shift in 3D point cloud perception. Over the last decades, there has been tremendous progress in within-domain training and testing. However, the performance of perception models degrades when training on a source domain and testing on a target domain sampled from a different data distribution: a change in sensor or geo-location can lead to a harmful drop in model performance. While solutions exist for image perception, this problem remains largely unresolved for point clouds. The focus of this thesis is the study and design of solutions for mitigating domain shift in 3D point cloud perception. We identify several settings that differ in the level of target supervision and the availability of source data. We conduct a thorough study of each setting and introduce a new method to address domain shift in each configuration. In particular, we study three novel settings in domain adaptation and domain generalization and propose five new methods for mitigating domain shift in 3D point cloud perception. Our methods are used by the research community and, at the time of writing, some of the proposed approaches hold the state of the art. In conclusion, this thesis provides a valuable contribution to the computer vision community, laying the groundwork for future work in cross-domain conditions.
6. Learning From Data Across Domains: Enhancing Human and Machine Understanding of Data From the Wild. Kulinski, Sean Michael, 13 December 2023.
<p dir="ltr">Data is collected everywhere in our world; however, it often is noisy and incomplete. Different sources of data may have different characteristics, quality levels, or come from dynamic and diverse environments. This poses challenges for both humans who want to gain insights from data and machines which are learning patterns from data. How can we leverage the diversity of data across domains to enhance our understanding and decision-making? In this thesis, we address this question by proposing novel methods and applications that use multiple domains as more holistic sources of information for both human and machine learning tasks. For example, to help human operators understand environmental dynamics, we show the detection and localization of distribution shifts to problematic features, as well as how interpretable distributional mappings can be used to explain the differences between shifted distributions. For robustifying machine learning, we propose a causal-inspired method to find latent factors that are robust to environmental changes and can be used for counterfactual generation or domain-independent training; we propose a domain generalization framework that allows for fast and scalable models that are robust to distribution shift; and we introduce a new dataset based on human matches in StarCraft II that exhibits complex and shifting multi-agent behaviors. We showcase our methods across various domains such as healthcare, natural language processing (NLP), computer vision (CV), etc. to demonstrate that learning from data across domains can lead to more faithful representations of data and its generating environments for both humans and machines.</p>
7. Toward trustworthy deep learning: out-of-distribution generalization and few-shot learning. Gagnon-Audet, Jean-Christophe, 04 1900.
Artificial intelligence (AI) is a rapidly advancing field, with data-driven approaches known as machine learning at the forefront of many recent breakthroughs. However, while machine learning has shown remarkable performance in tasks such as image recognition and generation, text generation and translation, and speech processing, it is known to fail silently under common conditions. This is because modern AI algorithms inherit biases from the data used to train them, leading to incorrect predictions when encountering new data that differs from the training data. This problem is known as distribution shift or out-of-distribution (OOD) failure. It makes modern AI untrustworthy and is a significant barrier to its safe, widespread deployment.
Failing to address the OOD generalization failure of machine learning could result in situations that put lives in danger or make it impossible to deploy AI in any significant manner. This thesis aims to tackle this issue and proposes solutions to ensure the safe and reliable deployment of modern deep learning models.
We present three papers that cover different directions for solving the OOD generalization failure of machine learning. The first paper proposes a direct approach that demonstrates improved performance over the state of the art. The second paper lays the groundwork for future research on OOD generalization in time series, while the third paper provides a straightforward solution for fixing generalization failures of large pretrained models when fine-tuned on downstream tasks. These papers make valuable contributions to the field and provide promising avenues for future research in OOD generalization.
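For readers unfamiliar with OOD-generalization objectives, the sketch below shows one widely used formulation, an IRMv1-style invariance penalty averaged over training environments; it is a generic illustration only and is not the approach taken in any of the three papers.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    """IRMv1-style penalty for one environment: squared gradient of the risk
    with respect to a fixed scalar multiplier on the classifier output."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def ood_objective(model, env_batches, penalty_weight=10.0):
    """Average risk plus invariance penalty across training environments.
    `env_batches` is one (inputs, labels) batch per environment; the weight
    is an illustrative assumption."""
    risks, penalties = [], []
    for x, y in env_batches:
        logits = model(x)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + penalty_weight * torch.stack(penalties).mean()
```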