41

Exploration de données pour l'optimisation de trajectoires aériennes / Data analysis for aircraft trajectory optimization

Rommel, Cédric 26 October 2018 (has links)
Cette thèse porte sur l'utilisation de données de vols pour l'optimisation de trajectoires de montée vis-à-vis de la consommation de carburant. Dans un premier temps, nous nous sommes intéressés au problème d'identification de modèles de la dynamique de l'avion dans le but de les utiliser pour poser le problème d'optimisation de trajectoire à résoudre. Nous commençons par proposer une formulation statique du problème d'identification de la dynamique. Nous l'interprétons comme un problème de régression multi-tâche à structure latente, pour lequel nous proposons un modèle paramétrique. L'estimation des paramètres est faite par l'application de quelques variations de la méthode du maximum de vraisemblance. Nous suggérons également dans ce contexte d'employer des méthodes de sélection de variables pour construire une structure de modèle de régression polynomiale dépendant des données. L'approche proposée est une extension à un contexte multi-tâche structuré du bootstrap Lasso. Elle nous permet en effet de sélectionner les variables du modèle dans un contexte à fortes corrélations, tout en conservant la structure du problème inhérente à nos connaissances métier. Dans un deuxième temps, nous traitons la caractérisation des solutions du problème d'optimisation de trajectoire relativement au domaine de validité des modèles identifiés. Dans cette optique, nous proposons un critère probabiliste pour quantifier la proximité entre une courbe arbitraire et un ensemble de trajectoires échantillonnées à partir d'un même processus stochastique. Nous proposons une classe d'estimateurs de cette quantité et nous étudions de façon plus pratique une implémentation non paramétrique basée sur des estimateurs à noyau, et une implémentation paramétrique faisant intervenir des mélanges gaussiens. Cette dernière est introduite comme pénalité dans le critère d'optimisation de trajectoire dans l'intention d'obtenir directement des trajectoires consommant peu sans trop s'éloigner des régions de validité. / This thesis deals with the use of flight data for the optimization of climb trajectories with respect to fuel consumption. We first focus on methods for identifying the aircraft dynamics, in order to plug them into the trajectory optimization problem. We suggest a static formulation of the identification problem, which we interpret as a structured multi-task regression problem. In this framework, we propose parametric models and use different maximum likelihood approaches to learn the unknown parameters. Furthermore, polynomial models are considered, and an extension of the bootstrap Lasso to the structured multi-task setting is used to make a consistent selection of the monomials despite the high correlations among them. Next, we consider the problem of assessing the optimized trajectories with respect to the validity region of the identified models. For this, we propose a probabilistic criterion for quantifying the closeness between an arbitrary curve and a set of trajectories sampled from the same stochastic process. We propose a class of estimators of this quantity and prove their consistency in some sense. A nonparametric implementation based on kernel density estimators, as well as a parametric implementation based on Gaussian mixtures, are presented. We introduce the latter as a penalty term in the trajectory optimization problem, which allows us to control the trade-off between trajectory acceptability and consumption reduction.
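
To make the last step concrete, here is a minimal sketch (not the thesis code) of how a Gaussian-mixture likelihood fitted on recorded flight states can act as a validity penalty inside a trajectory optimization criterion; the state variables, the fuel model, and the penalty weight below are invented for illustration.

```python
# Minimal sketch: a GMM log-likelihood as a "closeness to the flight-data
# region" penalty added to a fuel-cost objective. All data and the fuel
# surrogate are made up for illustration.
import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend these are (altitude, speed) samples from recorded climb profiles.
recorded_states = np.column_stack([
    rng.normal(5000.0, 800.0, 2000),   # altitude [m]
    rng.normal(150.0, 15.0, 2000),     # true airspeed [m/s]
])

# Parametric density of the "validity region" covered by the data.
gmm = GaussianMixture(n_components=3, random_state=0).fit(recorded_states)

def fuel_cost(speeds):
    """Placeholder surrogate for fuel burn as a function of the speed profile."""
    return np.sum((speeds - 140.0) ** 2) * 1e-3

def validity_penalty(speeds, altitudes):
    """Negative mean log-likelihood under the GMM: large far from the data."""
    states = np.column_stack([altitudes, speeds])
    return -gmm.score_samples(states).mean()

altitudes = np.linspace(3000.0, 8000.0, 20)   # fixed climb schedule
lam = 0.1                                     # penalty weight (trade-off)

def objective(speeds):
    return fuel_cost(speeds) + lam * validity_penalty(speeds, altitudes)

result = minimize(objective, x0=np.full(20, 150.0), method="L-BFGS-B")
print("optimized speed profile:", np.round(result.x, 1))
```

Increasing the assumed weight `lam` trades fuel savings for trajectories that stay closer to the region covered by the recorded data, which is exactly the acceptability/consumption trade-off the abstract describes.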
42

Remembering how to walk - Using Active Dendrite Networks to Drive Physical Animations / Att minnas att gå - användning av Active Dendrite Nätverk för att driva fysiska animeringar

Henriksson, Klas January 2023 (has links)
Creating embodied agents capable of performing a wide range of tasks in different types of environments has been a longstanding challenge in deep reinforcement learning. A novel network architecture introduced in 2021, the Active Dendrite Network [A. Iyer et al., “Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments”], designed to create sparse subnetworks for different tasks, showed promising multi-tasking performance on the Meta-World [T. Yu et al., “Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning”] benchmark. This thesis further explores the performance of this architecture in a multi-tasking environment focused on physical animations and locomotion. Specifically, we implement and compare the architecture to the commonly used Multi-Layer Perceptron (MLP) architecture on a multi-task reinforcement learning problem in a video-game setting, consisting of training a hexapedal agent on a set of locomotion tasks involving moving at different speeds, turning, and standing still. The evaluation focused on two areas: (1) assessing the average overall performance of the Active Dendrite Network relative to the MLP on a set of locomotion scenarios featuring our behaviour sets and environments, and (2) assessing the relative impact Active Dendrite Networks have on transfer learning between related tasks by comparing their performance on novel behaviours shortly after training a related behaviour. Our findings suggest that the Active Dendrite Network can make better use of limited network capacity than the MLP: the Active Dendrite Network outperformed the MLP by ∼18% on our benchmark under limited network capacity. When both networks have sufficient capacity, however, there is little difference between the two. We further find that Active Dendrite Networks have very similar transfer-learning capabilities to the MLP in our benchmarks.
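
For readers unfamiliar with the architecture, the following is a simplified sketch of an active-dendrite style layer, loosely following the mechanism described by Iyer et al.: a task context vector activates dendritic segments that gate each hidden unit, followed by a k-winner-take-all sparsity step. Dimensions, initialization, and the exact gating details are illustrative assumptions, not the paper's implementation.

```python
# Simplified active-dendrite layer: feedforward output is modulated by the
# strongest-responding dendritic segment for a given context, then sparsified.
import torch
import torch.nn as nn

class ActiveDendriteLayer(nn.Module):
    def __init__(self, in_dim, out_dim, context_dim, n_segments=4, k=32):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # One set of dendritic segment weights per output unit.
        self.segments = nn.Parameter(
            torch.randn(out_dim, n_segments, context_dim) * 0.01)
        self.k = min(k, out_dim)

    def forward(self, x, context):
        ff = self.linear(x)                                   # (B, out_dim)
        # Segment activations: (B, out_dim, n_segments)
        seg = torch.einsum("bc,osc->bos", context, self.segments)
        # Pick, per unit, the segment with the largest-magnitude response.
        idx = seg.abs().argmax(dim=-1, keepdim=True)
        chosen = seg.gather(-1, idx).squeeze(-1)              # (B, out_dim)
        gated = ff * torch.sigmoid(chosen)                    # dendritic gating
        # k-winner-take-all sparsity: keep only the k strongest units.
        topk = gated.abs().topk(self.k, dim=-1).indices
        mask = torch.zeros_like(gated).scatter(-1, topk, 1.0)
        return gated * mask

# Example: 10 locomotion tasks encoded as a one-hot context vector.
layer = ActiveDendriteLayer(in_dim=64, out_dim=128, context_dim=10)
obs = torch.randn(8, 64)
task = torch.eye(10)[torch.randint(0, 10, (8,))]
print(layer(obs, task).shape)  # torch.Size([8, 128])
```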
43

Cross-Lingual and Genre-Supervised Parsing and Tagging for Low-Resource Spoken Data

Fosteri, Iliana January 2023 (has links)
Dealing with low-resource languages is a challenging task because of the absence of sufficient data for training machine-learning models to make predictions on these languages. One way to deal with this problem is to use data from higher-resource languages, which enables the transfer of learning from these languages to the low-resource target ones. The present study focuses on dependency parsing and part-of-speech tagging of low-resource languages belonging to the spoken genre, i.e., languages whose treebank data is transcribed speech. These are the following: Beja, Chukchi, Komi-Zyrian, Frisian-Dutch, and Cantonese. Our approach involves investigating different types of transfer languages, employing MACHAMP, a state-of-the-art parser and tagger that uses contextualized word embeddings, in particular mBERT and XLM-R. The main idea is to explore how the genre, the language similarity, neither of the two, or the combination of both affect the model performance in the aforementioned downstream tasks for our selected target treebanks. Our findings suggest that in order to capture speech-specific dependency relations, we need to incorporate at least some genre-matching source data, while language-similarity-matching source data are a better candidate when the task at hand is part-of-speech tagging. We also explore the impact of multi-task learning in one of our proposed methods, but we observe only minor differences in model performance.
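
The sketch below is not MACHAMP itself; it only illustrates the underlying transfer recipe assumed in this line of work: a shared multilingual encoder (here XLM-R via Hugging Face Transformers) with a token-classification head is fine-tuned on source-treebank POS tags and can then be applied to a low-resource target language. The label count and the toy sentence are placeholders.

```python
# Generic cross-lingual POS-tagging sketch with a multilingual encoder.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=17)  # e.g. the 17 Universal POS tags

# One pre-tokenized training sentence from a higher-resource source treebank.
words = ["Dit", "is", "een", "voorbeeld", "."]   # Dutch source data (toy)
upos = [5, 10, 3, 0, 12]                         # placeholder tag ids

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens (-100 = ignored by the loss).
labels = [-100 if wid is None else upos[wid] for wid in enc.word_ids()]
out = model(**enc, labels=torch.tensor([labels]))
out.loss.backward()   # one fine-tuning step would follow (optimizer.step())
print(float(out.loss))
```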
44

Survivability Prediction and Analysis using Interpretable Machine Learning : A Study on Protecting Ships in Naval Electronic Warfare

Rydström, Sidney January 2022 (has links)
Computer simulation is a commonly applied technique for studying electronic warfare duels. This thesis aims to apply machine learning techniques to convert simulation output data into knowledge and insights regarding defensive actions for a ship facing multiple hostile missiles. The analysis may support tactical decision-making, hence the interpretability of predictions is necessary to allow for human evaluation and understanding of the impacts of the explanatory variables. The final distance of the threats to the target and the probability of the threats hitting the target were modeled using a multi-layer perceptron model with a multi-task approach, including custom loss functions. The results generated in this study show that the selected methodology is more successful than a baseline using regression models. Modeling the outcome with artificial neural networks results in a black box for decision-making. Therefore, the concept of interpretable machine learning was applied using a post-hoc approach. Given the learned model, the features considered, and the multiple threats, the feature contributions to the model were interpreted using Kernel SHAP (SHapley Additive exPlanations), a method that builds local linear surrogate models to approximate Shapley values. The analysis primarily showed that an increased seeker activation distance was important, and that increased time for defensive actions improved the outcomes. Further, the final distance to the ship predicted at the beginning of a simulation is important and, in general, a good guide to the actual outcome. The action of firing chaff grenades into the tracking gate was also important: more chaff grenades influenced the missiles' tracking and provided a preferable outcome from the defended ship's point of view.
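
As a hedged illustration of the modeling and interpretation pipeline described above (not the thesis code), the sketch below trains a shared-trunk multi-layer perceptron with a regression head for final distance and a classification head for hit probability under a combined loss, then applies Kernel SHAP to the regression output; feature semantics, sizes, and loss weights are assumptions.

```python
# Multi-task MLP with a custom combined loss, followed by Kernel SHAP.
import numpy as np
import torch
import torch.nn as nn
import shap

class MultiTaskMLP(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.distance_head = nn.Linear(hidden, 1)   # regression head
        self.hit_head = nn.Linear(hidden, 1)        # hit-probability logit

    def forward(self, x):
        h = self.trunk(x)
        return self.distance_head(h).squeeze(-1), self.hit_head(h).squeeze(-1)

# Toy simulation output: features standing in for e.g. seeker activation
# distance, number of chaff grenades fired, etc.
X = torch.randn(512, 6)
dist = X[:, 0] * 2.0 + torch.randn(512) * 0.1
hit = (dist < 0).float()

model = MultiTaskMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    d_pred, h_logit = model(X)
    loss = mse(d_pred, dist) + 0.5 * bce(h_logit, hit)  # combined custom loss
    loss.backward()
    opt.step()

# Kernel SHAP: local linear surrogates approximating Shapley values.
def predict_distance(x_np):
    with torch.no_grad():
        return model(torch.as_tensor(x_np, dtype=torch.float32))[0].numpy()

explainer = shap.KernelExplainer(predict_distance, X[:50].numpy())
shap_values = explainer.shap_values(X[:5].numpy())
print(np.round(shap_values, 3))
```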
45

Neural networks regularization through representation learning / Régularisation des réseaux de neurones via l'apprentissage des représentations

Belharbi, Soufiane 06 July 2018 (has links)
Les modèles de réseaux de neurones et en particulier les modèles profonds sont aujourd'hui l'un des modèles à l'état de l'art en apprentissage automatique et ses applications. Les réseaux de neurones profonds récents possèdent de nombreuses couches cachées ce qui augmente significativement le nombre total de paramètres. L'apprentissage de ce genre de modèles nécessite donc un grand nombre d'exemples étiquetés, qui ne sont pas toujours disponibles en pratique. Le sur-apprentissage est un des problèmes fondamentaux des réseaux de neurones, qui se produit lorsque le modèle apprend par coeur les données d'apprentissage, menant à des difficultés à généraliser sur de nouvelles données. Le problème du sur-apprentissage des réseaux de neurones est le thème principal abordé dans cette thèse. Dans la littérature, plusieurs solutions ont été proposées pour remédier à ce problème, tels que l'augmentation de données, l'arrêt prématuré de l'apprentissage ("early stopping"), ou encore des techniques plus spécifiques aux réseaux de neurones comme le "dropout" ou la "batch normalization". Dans cette thèse, nous abordons le sur-apprentissage des réseaux de neurones profonds sous l'angle de l'apprentissage de représentations, en considérant l'apprentissage avec peu de données. Pour aboutir à cet objectif, nous avons proposé trois différentes contributions. La première contribution, présentée dans le chapitre 2, concerne les problèmes à sorties structurées dans lesquels les variables de sortie sont à grande dimension et sont généralement liées par des relations structurelles. Notre proposition vise à exploiter ces relations structurelles en les apprenant de manière non-supervisée avec des autoencodeurs. Nous avons validé notre approche sur un problème de régression multiple appliquée à la détection de points d'intérêt dans des images de visages. Notre approche a montré une accélération de l'apprentissage des réseaux et une amélioration de leur généralisation. La deuxième contribution, présentée dans le chapitre 3, exploite la connaissance a priori sur les représentations à l'intérieur des couches cachées dans le cadre d'une tâche de classification. Cet à priori est basé sur la simple idée que les exemples d'une même classe doivent avoir la même représentation interne. Nous avons formalisé cet à priori sous la forme d'une pénalité que nous avons rajoutée à la fonction de perte. Des expérimentations empiriques sur la base MNIST et ses variantes ont montré des améliorations dans la généralisation des réseaux de neurones, particulièrement dans le cas où peu de données d'apprentissage sont utilisées. Notre troisième et dernière contribution, présentée dans le chapitre 4, montre l'intérêt du transfert d'apprentissage ("transfer learning") dans des applications dans lesquelles peu de données d'apprentissage sont disponibles. L'idée principale consiste à pré-apprendre les filtres d'un réseau à convolution sur une tâche source avec une grande base de données (ImageNet par exemple), pour les insérer par la suite dans un nouveau réseau sur la tâche cible. Dans le cadre d'une collaboration avec le centre de lutte contre le cancer "Henri Becquerel de Rouen", nous avons construit un système automatique basé sur ce type de transfert d'apprentissage pour une application médicale où l'on dispose d’un faible jeu de données étiquetées. Dans cette application, la tâche consiste à localiser la troisième vertèbre lombaire dans un examen de type scanner. 
L'utilisation du transfert d'apprentissage ainsi que de prétraitements et de post-traitements adaptés a permis d'obtenir de bons résultats, autorisant la mise en oeuvre du modèle en routine clinique. / Neural network models, and deep models in particular, are among the leading, state-of-the-art models in machine learning and have been applied in many different domains. The most successful deep neural models are the ones with many layers, which greatly increases their number of parameters. Training such models requires a large number of training samples, which are not always available. One of the fundamental issues in neural networks is overfitting, the issue tackled in this thesis. This problem often occurs when large models are trained using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning, dropout, and batch normalization. In this thesis, we tackle the neural network overfitting issue from a representation learning perspective, considering the situation where few training samples are available, which is the case in many real-world applications. We propose three contributions. The first one, presented in chapter 2, is dedicated to structured output problems, where multivariate regression is performed and the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data has been shown to improve the network's generalization and speed up its training. The second contribution, described in chapter 3, deals with the classification task, where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty that we add to the training cost to be minimized. Empirical experiments on MNIST and its variants showed an improvement in the network's generalization when using only few training samples. Our last contribution, presented in chapter 4, shows the value of transfer learning in applications where only few samples are available. The idea consists in re-using the filters of pre-trained convolutional networks that have been trained on large datasets such as ImageNet. Such pre-trained filters are plugged into a new convolutional network with new dense layers, and the whole network is then trained on a new task. In this contribution, we provide an automatic system based on this learning scheme, with an application to the medical domain. In this application, the task consists in localizing the third lumbar vertebra in a 3D CT scan. A pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision are included in the proposed system. This work was done in collaboration with the "Henri Becquerel Center" cancer clinic in Rouen, which provided us with the data.
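
A minimal sketch of the second contribution's idea follows, assuming a plain classifier: a penalty that pulls hidden representations of same-class samples together is added to the cross-entropy cost. The penalty form below (distance to the per-class mean) is one simple way to encode the prior and may differ from the thesis's exact formulation.

```python
# Same-class representation penalty added to the training cost.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Linear(64, 10)

def same_class_penalty(h, y):
    """Mean squared distance of each hidden vector to its class mean."""
    penalty = h.new_zeros(())
    for c in y.unique():
        hc = h[y == c]
        if hc.shape[0] > 1:
            penalty = penalty + ((hc - hc.mean(0)) ** 2).sum(1).mean()
    return penalty

x = torch.randn(64, 784)            # e.g. a small MNIST batch (toy data here)
y = torch.randint(0, 10, (64,))
h = encoder(x)
loss = F.cross_entropy(classifier(h), y) + 0.1 * same_class_penalty(h, y)
loss.backward()
print(float(loss))
```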
46

On iterated learning for task-oriented dialogue

Singhal, Soumye 01 1900 (has links)
Dans le traitement de la langue et des systèmes de dialogue, il est courant de pré-entraîner des modèles de langue sur un corpus humain avant de les affiner par le biais d'un simulateur et de la résolution de tâches. Malheureusement, ce type d'entraînement tend aussi à induire un phénomène connu sous le nom de dérive du langage. Concrètement, les propriétés syntaxiques et sémantiques de la langue initialement apprise se détériorent : les agents se concentrent uniquement sur la résolution de la tâche, et non plus sur la préservation de la langue. En s'inspirant des travaux en sciences cognitives, et notamment de l'apprentissage itératif (Kirby and Griffiths, 2014), nous proposons ici une approche générique pour contrer cette dérive du langage. Nous avons appelé cette méthode Seeded iterated learning (SIL), ou apprentissage itératif capitalisé. Ce travail a été publié sous le titre (Lu et al., 2020b) et est présenté au chapitre 2. Afin d'émuler la transmission de la langue entre chaque génération d'agents, un agent étudiant est d'abord pré-entraîné avant d'être affiné de manière itérative, et ceci en imitant des données échantillonnées à partir d'un agent enseignant nouvellement formé. À chaque génération, l'enseignant est créé en copiant l'agent étudiant, avant d'être de nouveau affiné en maximisant le taux de réussite de la tâche sous-jacente. Dans un second temps, nous présentons Supervised Seeded iterated learning (SSIL), ou apprentissage itératif capitalisé avec supervision, dans le chapitre 3 ; ce travail a été publié sous le titre (Lu et al., 2020a). SSIL s'appuie sur SIL en le combinant avec une autre méthode populaire appelée Supervised SelfPlay (S2P) (Gupta et al., 2019), ou apprentissage supervisé par auto-jeu. SSIL est capable d'atténuer les problèmes de S2P et de SIL, c'est-à-dire la dérive du langage dans les derniers stades de l'entraînement, tout en préservant une plus grande diversité linguistique. Tout d'abord, nous évaluons nos méthodes sous la forme d'une preuve de concept à travers le jeu de Lewis avec du langage synthétique. Dans un second temps, nous les étendons à un jeu de traduction utilisant du langage naturel. Dans les deux cas, nous soulignons l'efficacité de nos méthodes par rapport aux autres méthodes de la littérature. Dans le chapitre 1, nous discutons des concepts de base nécessaires à la compréhension des articles présentés dans les chapitres 2 et 3. Nous décrivons le problème spécifique du dialogue orienté tâche, y compris les approches actuelles et les défis auxquels elles sont confrontées, en particulier la dérive linguistique. Nous donnons également un aperçu du cadre d'apprentissage itéré. Certaines sections du chapitre 1 sont empruntées aux articles pour des raisons de cohérence et de facilité de compréhension. Le chapitre 2 comprend les travaux publiés sous le nom de (Lu et al., 2020b) et le chapitre 3 comprend les travaux publiés sous le nom de (Lu et al., 2020a), avant de conclure au chapitre 4. / In task-oriented dialogue, pretraining on a human corpus followed by finetuning in a simulator using self-play suffers from a phenomenon called language drift: the syntactic and semantic properties of the learned language deteriorate as the agents focus only on solving the task. Inspired by the iterated learning framework in cognitive science (Kirby and Griffiths, 2014), we propose a generic approach to counter language drift called Seeded iterated learning (SIL). This work was published as (Lu et al., 2020b) and is presented in Chapter 2.
In an attempt to emulate the transmission of language between generations, a pretrained student agent is iteratively refined by imitating data sampled from a newly trained teacher agent. At each generation, the teacher is created by copying the student agent, before being finetuned to maximize task completion. We further introduce Supervised Seeded iterated learning (SSIL) in Chapter 3, work which was published as (Lu et al., 2020a). SSIL builds upon SIL by combining it with another popular method called Supervised SelfPlay (S2P) (Gupta et al., 2019). SSIL is able to mitigate the problems of both S2P and SIL, namely late-stage training collapse and low language diversity. We evaluate our methods in the toy setting of the Lewis Game, and then scale them up to a translation game with natural language. In both settings, we highlight the efficacy of our methods compared to the baselines. In Chapter 1, we discuss the core concepts required for understanding the papers presented in Chapters 2 and 3. We describe the specific problem of task-oriented dialogue, including current approaches and the challenges they face, particularly the challenge of language drift. We also give an overview of the iterated learning framework. Some sections in Chapter 1 are borrowed from the papers for coherence and ease of understanding. Chapter 2 comprises the work published as (Lu et al., 2020b) and Chapter 3 comprises the work published as (Lu et al., 2020a). Chapter 4 concludes the work.
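
The generational loop of SIL can be summarized in a schematic, runnable skeleton like the one below; the agent class and the training routines are placeholders standing in for the pretrained dialogue or translation agents and their interactive and imitation training procedures.

```python
# Schematic Seeded Iterated Learning loop with placeholder training routines.
import copy
import random

class Agent:
    def __init__(self):
        self.params = [0.0]          # stand-in for network weights

def finetune_on_task(agent, steps):
    """Placeholder for interactive fine-tuning that maximizes task completion."""
    for _ in range(steps):
        agent.params[0] += random.uniform(0.0, 0.1)

def generate_samples(teacher):
    """Placeholder for sampling utterances/translations from the teacher."""
    return teacher.params[0]

def imitate(student, samples, steps):
    """Placeholder for supervised imitation of teacher-generated data."""
    for _ in range(steps):
        student.params[0] += 0.01 * (samples - student.params[0])

student = Agent()                    # assumed to be pretrained on human data
for generation in range(5):
    teacher = copy.deepcopy(student)       # seed the teacher with the student
    finetune_on_task(teacher, steps=10)    # interactive (self-play) phase
    data = generate_samples(teacher)
    imitate(student, data, steps=10)       # imitation phase counters drift
    print(generation, round(student.params[0], 3))
```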
47

Leveraging noisy side information for disentangling of factors of variation in a supervised setting

Carrier, Pierre Luc 08 1900 (has links)
No description available.
48

Multi-fidelity Machine Learning for Perovskite Band Gap Predictions

Panayotis Thalis Manganaris (16384500) 16 June 2023 (has links)
A wide range of optoelectronic applications demand semiconductors optimized for purpose. My research focused on data-driven identification of ABX3 halide perovskite compositions for optimum photovoltaic absorption in solar cells. I trained machine learning models on previously reported datasets of halide perovskite band gaps based on first-principles computations performed at different fidelities. Using these, I identified mixtures of candidate constituents at the A, B or X sites of the perovskite supercell which leveraged how mixed perovskite band gaps deviate from the linear interpolations predicted by Vegard's law of mixing, to obtain a selection of stable perovskites with band gaps in the ideal range of 1 to 2 eV for visible light spectrum absorption. These models predict the perovskite band gap using the composition and inherent elemental properties as descriptors. This enables accurate, high-fidelity prediction and screening of the much larger chemical space from which the data samples were drawn.

I utilized a recently published density functional theory (DFT) dataset of more than 1300 perovskite band gaps from four different levels of theory, added to an experimental perovskite band gap dataset of ~100 points, to train random forest regression (RFR), Gaussian process regression (GPR), and Sure Independence Screening and Sparsifying Operator (SISSO) regression models, with data fidelity added as one-hot encoded features. I found that RFR yields the best model, with a band gap root mean square error of 0.12 eV on the total dataset and 0.15 eV on the experimental points. SISSO provided compound features and functions for direct prediction of band gap, but errors were larger than from RFR and GPR. Additional insights gained from Pearson correlation and Shapley additive explanation (SHAP) analysis of learned descriptors suggest the RFR models performed best because of (a) their focus on identifying and capturing relevant feature interactions and (b) their flexibility to represent nonlinear relationships between such interactions and the band gap. The best model was deployed for predicting the experimental band gap of 37,785 hypothetical compounds. Based on this, we identified 1,251 stable compounds with band gap predicted to be between 1 and 2 eV at experimental accuracy, successfully narrowing the candidates to about 3% of the screened compositions.
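
The multi-fidelity setup described above can be illustrated with a small sketch using made-up data: band gaps from several levels of theory are pooled into one training set, the fidelity is one-hot encoded as extra input features, and a single random forest is then queried at experimental fidelity. Descriptor names and values are placeholders, not the thesis dataset.

```python
# Multi-fidelity band-gap regression with one-hot fidelity features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    # Toy composition/elemental descriptors (stand-ins for the real ones).
    "mean_electronegativity": rng.uniform(1.5, 3.0, n),
    "mean_ionic_radius": rng.uniform(0.8, 2.2, n),
    "fidelity": rng.choice(["PBE", "HSE06", "GW", "experiment"], n),
})
df["band_gap"] = (2.5 - 0.5 * df["mean_electronegativity"]
                  + 0.3 * (df["fidelity"] == "experiment")
                  + rng.normal(0, 0.1, n))

X = pd.get_dummies(df.drop(columns="band_gap"), columns=["fidelity"])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, df["band_gap"])

# Screen a hypothetical candidate at experimental fidelity.
candidate = pd.DataFrame([{c: 0.0 for c in X.columns}])
candidate["mean_electronegativity"] = 2.1
candidate["mean_ionic_radius"] = 1.4
candidate["fidelity_experiment"] = 1.0
print("predicted band gap (eV):", round(model.predict(candidate)[0], 2))
```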
49

A COMPREHENSIVE UNDERWATER DOCKING APPROACH THROUGH EFFICIENT DETECTION AND STATION KEEPING WITH LEARNING-BASED TECHNIQUES

Jalil Francisco Chavez Galaviz (17435388) 11 December 2023 (has links)
<p dir="ltr">The growing movement toward sustainable use of ocean resources is driven by the pressing need to alleviate environmental and human stressors on the planet and its oceans. From monitoring the food web to supporting sustainable fisheries and observing environmental shifts to protect against the effects of climate change, ocean observations significantly impact the Blue Economy. Acknowledging the critical role of Autonomous Underwater Vehicles (AUVs) in achieving persistent ocean exploration, this research addresses challenges focusing on the limited energy and storage capacity of AUVs, introducing a comprehensive underwater docking solution with a specific emphasis on enhancing the terminal homing phase through innovative vision algorithms leveraging neural networks.</p><p dir="ltr">The primary goal of this work is to establish a docking procedure that is failure-tolerant, scalable, and systematically validated across diverse environmental conditions. To fulfill this objective, a robust dock detection mechanism has been developed that ensures the resilience of the docking procedure through \comment{an} improved detection in different challenging environmental conditions. Additionally, the study addresses the prevalent issue of data sparsity in the marine domain by artificially generating data using CycleGAN and Artistic Style Transfer. These approaches effectively provide sufficient data for the docking detection algorithm, improving the localization of the docking station.</p><p dir="ltr">Furthermore, this work introduces methods to compress the learned docking detection model without compromising performance, enhancing the efficiency of the overall system. Alongside these advancements, a station-keeping algorithm is presented, enabling the mobile docking station to maintain position and heading while awaiting the arrival of the AUV. To leverage the sensors onboard and to take advantage of the computational resources to their fullest extent, this research has demonstrated the feasibility of simultaneously learning docking detection and marine wildlife classification through multi-task and transfer learning. This multifaceted approach not only tackles the limitations of AUVs' energy and storage capacity but also contributes to the robustness, scalability, and systematic validation of underwater docking procedures, aligning with the broader goals of sustainable ocean exploration and the blue economy.</p>
50

Multi-Scale Task Dynamics in Transfer and Multi-Task Learning : Towards Efficient Perception for Autonomous Driving / Flerskalig Uppgiftsdynamik vid Överförings- och Multiuppgiftsinlärning : Mot Effektiv Perception för Självkörande Fordon

Ekman von Huth, Simon January 2023 (has links)
Autonomous driving technology has the potential to revolutionize the way we think about transportation and its impact on society. Perceiving the environment is a key aspect of autonomous driving, which involves multiple computer vision tasks. Multi-scale deep learning has dramatically improved the performance on many computer vision tasks, but its practical use in autonomous driving is limited by the available resources in embedded systems. Multi-task learning offers a solution to this problem by allowing more compact deep learning models that share parameters between tasks. However, not all tasks benefit from being learned together. One way of avoiding task interference during training is to learn tasks in sequence, with each task providing useful information for the next – a scheme which builds on transfer learning. Multi-task and transfer dynamics are both concerned with the relationships between tasks, but have previously only been studied separately. This Master's thesis investigates how different computer vision tasks relate to each other in the context of multi-task and transfer learning, using a state-of-the-art efficient multi-scale deep learning model. Through an experimental research methodology, the performance on semantic segmentation, depth estimation, and object detection was evaluated on the Virtual KITTI 2 dataset in a multi-task and transfer learning setting. In addition, transfer learning with a frozen encoder was compared to constrained encoder fine-tuning, to uncover the effects of fine-tuning on task dynamics. The results suggest that findings from previous work regarding semantic segmentation and depth estimation in multi-task learning generalize to multi-scale learning on autonomous driving data. Further, no statistically significant correlation was found between multi-task learning dynamics and transfer learning dynamics. An analysis of the results from transfer learning indicates that some tasks might be more sensitive to fine-tuning than others, suggesting that transferring with a frozen encoder only captures a subset of the complexities involved in transfer relationships. Regarding object detection, it is observed to negatively impact the performance on other tasks during multi-task learning, but it might be a valuable task to transfer from due to lower annotation costs. Possible avenues for future work include applying the used methodology to real-world datasets and exploring ways of utilizing the presented findings for more efficient perception algorithms. / Självkörande teknik har potential att revolutionera transport och dess påverkan på samhället. Självkörning medför ett flertal uppgifter inom datorseende, som bäst löses med djupa neurala nätverk som lär sig att tolka bilder på flera olika skalor. Begränsningar i mobil hårdvara kräver dock att tekniker som multiuppgifts- och sekventiell inlärning används för att minska det neurala nätverkets fotavtryck, där sekventiell inlärning bygger på överföringsinlärning. Dynamiken bakom både multiuppgiftsinlärning och överföringsinlärning kan till stor del krediteras relationen mellan olika uppdrag. Tidigare studier har dock bara undersökt dessa dynamiker var för sig. Detta examensarbete undersöker relationen mellan olika uppdrag inom datorseende från perspektivet av både multiuppgifts- och överföringsinlärning. En experimentell forskningsmetodik användes för att jämföra och undersöka tre uppgifter inom datorseende på datasetet Virtual KITTI 2.
Resultaten stärker tidigare forskning och föreslår att tidigare fynd kan generaliseras till flerskaliga nätverk och data för självkörning. Resultaten visar inte på någon signifikant korrelation mellan multiuppgift- och överföringsdynamik. Slutligen antyder resultaten att vissa uppgiftspar ställer högre krav än andra på att nätverket anpassas efter överföring.
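
The two transfer settings compared above can be sketched as follows, with a placeholder encoder and task head: (a) a frozen encoder where only the new head is trained, and (b) constrained fine-tuning where the encoder is updated with a much smaller learning rate than the head. The learning rates and module shapes are arbitrary assumptions.

```python
# Frozen-encoder transfer vs. constrained encoder fine-tuning.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 19)    # toy task head (stand-in for a decoder)

# (a) Frozen encoder: transfer captures only what the fixed features allow.
for p in encoder.parameters():
    p.requires_grad = False
opt_frozen = torch.optim.Adam(head.parameters(), lr=1e-3)
# (training with opt_frozen would update only the head)

# (b) Constrained fine-tuning: the encoder moves, but much more slowly.
for p in encoder.parameters():
    p.requires_grad = True
opt_finetune = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])

x = torch.randn(4, 3, 64, 64)
loss = head(encoder(x)).pow(2).mean()   # placeholder loss
loss.backward()
opt_finetune.step()
```

Comparing how task performance changes between settings (a) and (b) is one simple way to probe whether a transfer relationship depends on letting the shared representation adapt.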
