181

Relações entre ranking, análise ROC e calibração em aprendizado de máquina / Relations among rankings, ROC analysis and calibration applied to machine learning

Edson Takashi Matsubara 21 October 2008 (has links)
Supervised learning has been used mostly for classification. In this work we show the benefits of a shift in attention from classification to ranking. A ranker is an algorithm that sorts a set of instances from highest to lowest expectation of being positive, and a ranking is the outcome of this sorting. Usually a ranking is obtained by sorting the confidence scores given by a classifier. This work is concerned with novel approaches that promote the use of rankings. We first present the differences and relations between ranking and classification, followed by a novel ranking algorithm called LEXRANK, whose rankings are derived not from classification scores but from a simple ordering of attribute values obtained from the training data. One very important field that takes rankings as its main input is ROC analysis. The study of decision trees together with ROC analysis suggested a way to visualize tree construction in ROC graphs step by step, which has been implemented in a system called PROGROC. Still within ROC analysis, we observed that the slope of each segment of the ROC convex hull equals a likelihood ratio, which can be converted into probabilities. This conversion, called ROC convex hull calibration, turns out to be equivalent to the Pool Adjacent Violators (PAV) algorithm, which implements isotonic regression. Furthermore, ROC convex hull calibration optimizes the Brier score, and exploring this measure led us to an interesting connection between the Brier score and ROC curves. Finally, we also investigate the rankings built by the selection step that incrementally labels examples in CO-TRAINING, a semi-supervised multi-view learning algorithm.
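A minimal sketch of the calibration idea described above: the Pool Adjacent Violators algorithm (isotonic regression) applied to classifier scores, which the abstract identifies with ROC convex hull calibration. The variable names and the toy data are illustrative, not taken from the thesis.

```python
import numpy as np

def pav_calibrate(scores, labels):
    """Calibrate ranking scores into probabilities with the
    Pool Adjacent Violators algorithm (isotonic regression)."""
    order = np.argsort(scores)                # sort examples by score
    y = np.asarray(labels, float)[order]
    merged = []                               # each block holds [sum of labels, count]
    for yi in y:
        merged.append([yi, 1.0])
        # merge while the previous block's mean exceeds the current one's
        while len(merged) > 1 and merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]:
            s, n = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += n
    # expand block means back to one calibrated probability per example
    calibrated_sorted = np.concatenate([np.full(int(n), s / n) for s, n in merged])
    calibrated = np.empty_like(calibrated_sorted)
    calibrated[order] = calibrated_sorted
    return calibrated

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])   # toy classifier scores
labels = np.array([0,   0,   1,    1,   1,   0  ])   # toy ground truth
print(pav_calibrate(scores, labels))
```

The block means produced this way are exactly the segment slopes of the ROC convex hull rescaled into probabilities, which is the equivalence the abstract points out.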
182

Complex network component unfolding using a particle competition technique / Desdobramento de componentes de redes complexas utilizando uma técnica de competição de partículas

Paulo Roberto Urio 12 June 2017 (has links)
This work applies complex network theory to the problem of semi-supervised and unsupervised learning in networks that represent multivariate datasets. Complex networks allow the use of nonlinear dynamical systems whose behavior follows the connectivity patterns of the network. Inspired by behavior observed in nature, such as competition for limited resources, dynamical system models can be employed to uncover the organizational structure of a network. In this dissertation, we develop a technique for classifying data represented as interaction networks. As part of the technique, we model a dynamical system inspired by the biological dynamics of resource competition. So far, similar methods have focused on vertices as the resource of competition; we introduce edges as the resource of competition. In doing so, the connectivity pattern of a network can be used not only in the dynamical-system simulation but also in the learning task itself.
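The particle-competition idea can be illustrated loosely with a random-walk simulation on a k-nearest-neighbour graph: one particle per labelled class walks the graph and accumulates "domination" on the edges it traverses, and each vertex then inherits the class that dominates its incident edges. This is only a rough sketch of the family of methods the abstract builds on, not the dissertation's model; the graph construction, update rule, and parameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X, y = make_moons(200, noise=0.08, random_state=0)
labeled = np.concatenate([np.flatnonzero(y == c)[:5] for c in (0, 1)])  # few seeds per class

A = kneighbors_graph(X, n_neighbors=6, include_self=False).toarray()
A = np.maximum(A, A.T)                                    # symmetric adjacency
n_classes = 2
edge_dom = np.zeros((n_classes, len(X), len(X)))          # per-class edge domination

for _ in range(4000):                                     # random-walk competition
    c = rng.integers(n_classes)                           # pick a class/particle
    v = rng.choice(labeled[y[labeled] == c])              # start from one of its seeds
    for _ in range(25):                                   # short walk
        nbrs = np.flatnonzero(A[v])
        u = rng.choice(nbrs)
        edge_dom[c, v, u] += 1                            # claim the traversed edge
        edge_dom[c, u, v] += 1
        v = u

# each vertex takes the class that dominates its incident edges
pred = edge_dom.sum(axis=2).argmax(axis=0)
print(f"agreement with ground truth: {(pred == y).mean():.2f}")
```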
183

Análise de sentimentos em textos curtos provenientes de redes sociais / Sentiment analysis in short texts from social networks

Nadia Felix Felipe da Silva 22 February 2016 (has links)
Sentiment analysis is a field of study that has recently become popular due to the growth of the Internet and the content generated by its users, especially on social networks, where people post their opinions in colloquial language, often using graphical shortcuts to make their messages even more succinct. This is the case of Twitter, a communication tool that can easily be used as a source of information for automatic sentiment-inference tools. Research efforts have treated sentiment analysis in social networks as a classification problem, with little consensus about which classifier has the best predictive power or which feature-engineering configuration best represents the texts. Another problem is that, in a supervised setting, the training stage of the classification model requires labeled examples, which demand considerable human effort and are hard to obtain in most applications. 
The objective of this thesis is to investigate the use of classifier ensembles, exploring the diversity and the potential of various supervised approaches when they act together, along with a detailed study of the phase that precedes the choice of the classifier, known as feature engineering. In addition, a study is carried out showing that unsupervised learning can provide useful complementary constraints that improve the generalization ability of sentiment classifiers, giving evidence that gains already observed in other areas also carry over to this domain. Building on the promising results obtained in the supervised setting, an existing algorithm called C3E (Consensus between Classification and Clustering Ensembles) was adapted and extended to the semi-supervised setting. This algorithm refines the sentiment classification with additional information provided by clusters of the data, in a self-training procedure. The approach shows results that are promising and competitive with state-of-the-art algorithms.
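A simplified sketch of the consensus idea behind C3E: classifier posteriors are iteratively pulled towards the predictions of points that fall in the same clusters, so that clustering acts as a complementary constraint. The update rule, the co-association matrix built from a few k-means runs, and the weight `alpha` are simplifications assumed here for illustration; the thesis's C3E formulation and its self-training extension are more elaborate.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=100, random_state=0)

# classifier output: class posteriors on the target (unlabeled/test) set
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pi = clf.predict_proba(X_te)                               # shape (n, 2)

# cluster ensemble: co-association matrix from several k-means runs
S = np.zeros((len(X_te), len(X_te)))
for seed in range(5):
    z = KMeans(n_clusters=8, n_init=5, random_state=seed).fit_predict(X_te)
    S += (z[:, None] == z[None, :])
S /= 5.0
np.fill_diagonal(S, 0.0)

# consensus refinement: pull each posterior towards its cluster neighbours
alpha = 1.0
y_ref = pi.copy()
for _ in range(20):
    neigh = S @ y_ref / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)
    y_ref = (pi + alpha * neigh) / (1.0 + alpha)

print("classifier alone :", (pi.argmax(1) == y_te).mean())
print("after consensus  :", (y_ref.argmax(1) == y_te).mean())
```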
184

Model independent searches for New Physics using Machine Learning at the ATLAS experiment / Recherche de Nouvelle Physique indépendante d'un modèle en utilisant l’apprentissage automatique sur l’experience ATLAS

Jimenez, Fabricio 16 September 2019 (has links)
We address the problem of model-independent searches for New Physics (NP) at the Large Hadron Collider (LHC) using the ATLAS detector. Particular attention is paid to the development and testing of novel machine learning techniques for that purpose. This work presents three main results. Firstly, we put in place a system for automatic monitoring of generic signatures within TADA, an ATLAS software tool. We explored over 30 signatures in the 2017 data-taking period and no particular discrepancy was observed with respect to simulations of Standard Model processes. Secondly, we propose a collective anomaly detection method for model-independent NP searches at the LHC: a parametric approach built on a semi-supervised learning algorithm. It uses a penalized likelihood and is able simultaneously to perform appropriate variable selection and to detect possible collective anomalous behavior in the data with respect to a given background sample. Thirdly, we present preliminary studies on modeling the background and detecting generic signals in invariant mass spectra using Gaussian processes (GPs) with no prior information on the mean. 
Two methods were tested on two datasets: a two-step procedure on a dataset drawn from the Standard Model simulations used for the ATLAS General Search, in the channel with two jets in the final state, and a three-step procedure on a dataset simulated for signal (Z′) and background (Standard Model) in the search for resonances in the top-pair invariant mass spectrum. Our study is a first step towards a method that takes advantage of GPs as a modeling tool applicable to several signatures in a more model-independent setup.
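The background-modelling idea can be sketched with an ordinary GP regression on a smoothly falling toy mass spectrum: fit the bin contents with a zero-mean GP (RBF kernel plus a white-noise term), then inspect the per-bin residual significance to flag a possible bump. The toy spectrum, kernel choice, and significance criterion are assumptions made for illustration, not the procedures used in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# toy invariant-mass spectrum: falling exponential background + small Gaussian bump
m = np.linspace(200.0, 2000.0, 90)
background = 5e4 * np.exp(-m / 300.0)
signal = 60.0 * np.exp(-0.5 * ((m - 1200.0) / 40.0) ** 2)
counts = rng.poisson(background + signal).astype(float)

# GP fit with no mean prior: RBF captures the smooth shape, WhiteKernel the noise
kernel = 1.0 * RBF(length_scale=200.0) + WhiteKernel(noise_level=10.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(m[:, None], counts)
mu, sigma = gp.predict(m[:, None], return_std=True)

# flag bins where the data deviate most from the smooth GP background fit
z = (counts - mu) / np.sqrt(sigma**2 + np.maximum(mu, 1.0))   # crude per-bin significance
print("most significant bins (mass, z):")
for i in np.argsort(z)[-3:]:
    print(f"  m = {m[i]:7.1f}  z = {z[i]:+.2f}")
```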
185

Méthodes des matrices aléatoires pour l’apprentissage en grandes dimensions / Methods of random matrices for large dimensional statistical learning

Mai, Xiaoyi 16 October 2019 (has links)
The BigData challenge creates a need for machine learning algorithms to scale to large-dimensional data and become more efficient. Recently, a new direction of research has emerged that consists in analyzing learning methods in the modern regime where the number n and the dimension p of data samples are commensurately large. Compared to the conventional regime where n >> p, the regime with large and comparable n, p is particularly interesting, as the learning performance remains sensitive to the tuning of hyperparameters, thus opening a path towards understanding and improving learning techniques for large-dimensional datasets. The technical approach employed in this thesis draws on several advanced tools of high-dimensional statistics, allowing us to conduct analyses that go beyond the state of the art. The first part of the dissertation is devoted to the study of semi-supervised learning on high-dimensional data. Motivated by our theoretical findings, we propose a superior alternative to the standard semi-supervised method of Laplacian regularization. Methods defined through implicit optimizations, such as SVMs and logistic regression, are then investigated under realistic mixture models, providing exhaustive details on the learning mechanism. Several important consequences are thus revealed, some of which even contradict common belief.
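For reference, a compact sketch of one common formulation of the graph-based Laplacian regularization baseline that the thesis analyzes and improves upon: propagate the few known labels over an RBF affinity graph by solving a linear system involving the graph Laplacian. The low-dimensional toy data and the bandwidth are assumptions chosen only to show the mechanics of the method, not the high-dimensional regime studied in the thesis.

```python
import numpy as np
from sklearn.datasets import make_moons

X, y01 = make_moons(300, noise=0.08, random_state=0)
y = 2.0 * y01 - 1.0                                   # labels in {-1, +1}
labeled = np.concatenate([np.flatnonzero(y01 == c)[:5] for c in (0, 1)])
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

# RBF affinity matrix and unnormalized graph Laplacian
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sigma = 0.2                                           # RBF bandwidth (an assumption)
W = np.exp(-D2 / (2 * sigma**2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Laplacian regularization: clamp the known labels, solve L_uu f_u = -L_ul y_l
f = np.zeros(len(X))
f[labeled] = y[labeled]
f[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                               -L[np.ix_(unlabeled, labeled)] @ y[labeled])

acc = (np.sign(f[unlabeled]) == y[unlabeled]).mean()
print(f"accuracy on unlabeled points: {acc:.2f}")
```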
186

Apprentissage et noyau pour les interfaces cerveau-machine / Study of kernel machines towards brain-computer interfaces

Tian, Xilan 07 May 2012 (has links)
Brain-computer interfaces (BCIs) have been applied successfully both in the clinical domain and to improve the daily life of patients with disabilities. As an essential component, the signal-processing module largely determines the performance of a BCI system. In this thesis we aim to improve the signal-processing strategy from a machine learning perspective. Firstly, we developed an algorithm based on transductive SVMs coupled with multiple kernels (TSVM-MKL) in order to integrate different views of the data, namely a statistical view and a geometrical view, into the learning process. Secondly, we proposed an online version of multiple kernel learning for the supervised case, reducing the computational burden involved in most MKL algorithms. The proposed algorithms achieve better classification performance than classical single-kernel machines and, thanks to the MKL formulation, automatically select the useful EEG channels. In the last part, we address the improvement of the signal-processing module beyond the machine learning algorithms themselves. Analyzing off-line BCI data, we first confirmed that a simple classification model can also reach satisfactory performance through careful feature (and/or channel) selection. We then designed an emotional BCI system that takes the user's emotional state into account. Based on EEG data recorded under different emotional states, namely positive, negative and neutral emotions, we finally showed, using statistical tests, that emotion affects BCI performance. This part of the work provides a basis for building BCIs that are better adapted to their users.
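A small sketch of the multiple-kernel idea the abstract relies on for automatic channel selection: build one RBF kernel per EEG channel, weight the kernels (here by centered kernel-target alignment, a simple heuristic rather than the thesis's TSVM-MKL or online MKL algorithms), and feed the combined kernel to an SVM. The synthetic "EEG" features and the alignment-based weighting are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic data: 8 "channels" x 5 features each; only channels 0 and 3 carry signal
n, n_channels, d = 300, 8, 5
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, n_channels, d))
X[:, 0] += 1.2 * (2 * y - 1)[:, None]
X[:, 3] += 0.8 * (2 * y - 1)[:, None]

tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0, stratify=y)

def alignment(K, yt):
    """Centered kernel-target alignment between kernel K and labels yt in {-1,+1}."""
    Yk = np.outer(yt, yt)
    Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()
    return (Kc * Yk).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Yk))

yt = 2.0 * y[tr] - 1.0
kernels = [rbf_kernel(X[:, c]) for c in range(n_channels)]
weights = np.array([max(alignment(K[np.ix_(tr, tr)], yt), 0.0) for K in kernels])
weights /= weights.sum()
print("channel weights:", np.round(weights, 2))        # informative channels dominate

K = sum(w * Ki for w, Ki in zip(weights, kernels))
svm = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
print("test accuracy  :", svm.score(K[np.ix_(te, tr)], y[te]))
```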
187

Semisupervizované hluboké učení v označování sekvencí / Semi-supervised deep learning in sequence labeling

Páll, Juraj Eduard January 2019 (has links)
Sequence labeling is a type of machine learning problem that involves assigning a label to each sequence member. Deep learning has shown good performance for this problem. However, one disadvantage of this approach is its requirement of having a large amount of labeled data. Semi-supervised learning mitigates this problem by using cheaper unlabeled data together with labeled data. Currently, usage of semi-supervised deep learning for sequence labeling is limited. Therefore, the focus of this thesis is on the application of semi-supervised deep learning in sequence labeling. Existing semi-supervised deep learning approaches are examined, and approaches for sequence labeling are proposed. The proposed approaches were implemented and experimentally evaluated on named-entity recognition and part-of-speech tagging tasks.
188

Plug-in methods in classification / Méthodes de type plug-in en classification

Chzhen, Evgenii 25 September 2019 (has links)
This manuscript studies several problems of constrained classification. In this framework, our goal is to construct an algorithm that performs as well as the best classifier obeying some desired property. Plug-in type classifiers are well suited to this goal. Interestingly, it is shown that in several setups these classifiers can leverage unlabeled data, that is, they are constructed in a semi-supervised manner. Chapter 2 describes two particular settings of binary classification: classification with the F-score and classification with equal opportunity. For both problems, semi-supervised procedures are proposed and their theoretical properties are established. In the case of the F-score, the proposed procedure is shown to be minimax optimal over a standard non-parametric class of distributions. In the case of classification with equal opportunity, the proposed algorithm is shown to be consistent in terms of the misclassification risk and its asymptotic fairness is established; moreover, the proposed procedure outperforms state-of-the-art algorithms in the field. Chapter 3 describes the setup of confidence-set multi-class classification. 
Again, a semi-supervised procedure is proposed and its nearly minimax optimality is established. It is additionally shown that no supervised algorithm can achieve a so-called fast rate of convergence, whereas the proposed semi-supervised procedure can achieve fast rates provided that the unlabeled dataset is sufficiently large. Chapter 4 describes a setup of multi-label classification in which one aims at minimizing the false negative error subject to almost-sure constraints on the classification rules. Two specific constraints are considered: sparse predictions and predictions with control over false negative errors. For the former, a supervised algorithm is provided and shown to achieve fast rates of convergence; for the latter, it is shown that extra assumptions are necessary to obtain theoretical guarantees on the classification risk.
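A rough sketch of the plug-in, semi-supervised flavour described above: estimate the regression function η(x) = P(Y = 1 | x) on labeled data, then use unlabeled data to choose the threshold that maximizes a plug-in estimate of the F-score. The logistic-regression estimator of η, the threshold grid, and the toy data are assumptions; the thesis derives a specific threshold rule with minimax guarantees rather than this grid search.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=3000, weights=[0.85], flip_y=0.05, random_state=0)
X_lab, y_lab = X[:300], y[:300]           # small labeled sample
X_unl = X[300:2000]                       # unlabeled pool (labels ignored)
X_te, y_te = X[2000:], y[2000:]           # held out for evaluation only

# step 1: plug-in estimate of eta(x) = P(Y=1 | x) from the labeled data
eta = LogisticRegression(max_iter=1000).fit(X_lab, y_lab).predict_proba
p_unl = eta(X_unl)[:, 1]

# step 2: choose the threshold maximizing a plug-in F-score estimate on unlabeled data:
#   expected TP(t) = sum_{eta>t} eta,  expected FP(t) = sum_{eta>t} (1-eta),
#   expected FN(t) = sum_{eta<=t} eta  (all computed with eta-hat, no labels needed)
def plugin_f1(p, t):
    pred = p > t
    tp = p[pred].sum()
    fp = (1 - p[pred]).sum()
    fn = p[~pred].sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

grid = np.linspace(0.05, 0.95, 91)
t_star = grid[np.argmax([plugin_f1(p_unl, t) for t in grid])]

# compare the default 0.5 rule with the tuned plug-in rule
p_te = eta(X_te)[:, 1]
print("threshold chosen on unlabeled data:", round(t_star, 2))
print("F1 at threshold 0.5  :", round(f1_score(y_te, p_te > 0.5), 3))
print("F1 at tuned threshold:", round(f1_score(y_te, p_te > t_star), 3))
```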
189

Training a computer vision model using semi-supervised learning and applying post-training quantizations

Vedin, Albernn January 2022 (has links)
Electric scooters have gained a lot of attention and popularity among commuters around the world since they entered the market, having proven to be an efficient and cost-effective mode of transportation for commuters and travelers. Today, electric scooters are firmly established in the micromobility industry, with increasing global demand. However, as the industry booms, so do accidents and dangerous riding situations, and there is growing concern about scooter safety as more and more people are injured. This research focuses on training a computer vision model using semi-supervised learning to help detect traffic-rule violations and prevent collisions for people riding electric scooters. Applying a computer vision model on an embedded system can be challenging, however, because of the limited capabilities of the hardware; this is where post-training quantization of the model comes in. This thesis examines which post-training quantization performs best and whether it can outperform the non-quantized model. Three post-training quantizations are applied to the model: dynamic range, full integer, and float16. The results showed that the non-quantized model achieved a mean average precision (mAP) of 0.03894, with average training and validation losses of 22.10 and 28.11. The non-quantized model was compared with the three post-training quantizations in terms of mAP, and dynamic range post-training quantization achieved the best performance with an mAP of 0.03933.
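The three quantization modes compared in the thesis map onto the TensorFlow Lite converter roughly as sketched below. The converter calls are standard TensorFlow Lite API; the saved-model path, input shape, and representative-dataset generator are placeholders, and the exact model and calibration data used in the thesis are not reproduced here.

```python
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "path/to/saved_model"          # placeholder path

def representative_data_gen():
    # placeholder calibration samples; real calibration should use training images
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

# 1) dynamic range quantization: weights to int8, activations stay float
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dynamic_range_model = converter.convert()

# 2) float16 quantization: weights stored as float16
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
float16_model = converter.convert()

# 3) full integer quantization: weights and activations to int8 (needs calibration data)
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
full_integer_model = converter.convert()

with open("model_dynamic_range.tflite", "wb") as f:
    f.write(dynamic_range_model)
```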
190

Be More with Less: Scaling Deep-learning with Minimal Supervision

Yaqing Wang (12470301) 28 April 2022 (has links)
Large-scale deep learning models have reached previously unattainable performance on various tasks. However, the ever-growing resource consumption of neural networks generates a large carbon footprint, makes it difficult for academics to engage in research, and prevents emerging economies from enjoying the growing benefits of Artificial Intelligence (AI). To further scale AI and bring more benefits, two major challenges need to be solved. Firstly, even though large-scale deep learning models have achieved remarkable success, their performance is still not satisfactory when fine-tuning with only a handful of examples, hindering widespread adoption in real-world applications where large amounts of labeled data are difficult to obtain. Secondly, current machine learning models are still mainly designed for tasks in closed environments where the test data are highly similar to the training data. When the deployment data exhibit a distribution shift relative to the collected training data, we generally observe degraded performance of the developed models; how to build adaptable models becomes another critical challenge. To address these challenges, this dissertation focuses on two topics: few-shot learning, which aims to learn tasks with limited labeled data, and domain adaptation, which addresses the discrepancy between training data and test data. Part 1 presents our few-shot learning studies. The proposed few-shot solutions are built upon large-scale language models, with evolutionary explorations ranging from improving supervision signals and incorporating unlabeled data to improving few-shot learning abilities with lightweight fine-tuning designs that reduce deployment costs. Part 2 introduces our domain adaptation studies. We develop a progressive series of domain adaptation approaches to transfer knowledge across domains efficiently and handle distribution shifts, including capturing common patterns across domains, adaptation with weak supervision, and adaptation to thousands of domains with limited labeled and unlabeled data.
