About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Segmentace lézí roztroušené sklerózy pomocí hlubokých neuronových sítí / Segmentation of multiple sclerosis lesions using deep neural networks

Sasko, Dominik January 2021 (has links)
The main aim of this master's thesis was the automatic segmentation of multiple sclerosis lesions in MRI scans. The thesis tested state-of-the-art segmentation methods based on deep neural networks and compared approaches to weight initialization using transfer learning and self-supervised learning. Automatic segmentation of multiple sclerosis lesions is a very difficult problem, primarily because of the highly imbalanced dataset (brain scans usually contain only a small amount of damaged tissue). A further challenge is the manual annotation of these lesions: two different physicians can mark different parts of the brain as damaged, and the Dice coefficient between their annotations is approximately 0.86. Simplifying the annotation process through automation could improve the estimation of lesion load, which in turn could improve the diagnosis of individual patients. Our goal was to design two techniques that use transfer learning to pretrain weights and could later improve the results of current segmentation models. The theoretical part describes the taxonomy of artificial intelligence, machine learning, and deep neural networks and their use in image segmentation. Multiple sclerosis, its types, symptoms, diagnosis, and treatment are then described. The practical part begins with data preprocessing. First, the brain scans were resampled to the same resolution with the same voxel size, because three different datasets were used whose scans were acquired with different devices from different manufacturers. One dataset also contained the skull, which had to be removed with the FSL tool to keep only the patient's brain. We used 3D scans (FLAIR, T1, and T2 modalities), which were split into individual 2D slices and fed into a neural network with an encoder-decoder architecture.
The training dataset contained 6720 slices at a resolution of 192 x 192 pixels (after removing slices whose masks contained no positive values). The loss function was Combo loss (a combination of Dice loss with a modified cross-entropy). The first method used weights pretrained on the ImageNet dataset for the encoder of a U-Net architecture, with the encoder weights either frozen or unfrozen, and compared the results with random weight initialization. In this case we used only the FLAIR modality. Transfer learning raised the monitored metric from approximately 0.4 to 0.6; the difference between frozen and unfrozen encoder weights was around 0.02. The second proposed technique used a self-supervised context encoder with Generative Adversarial Networks (GANs) to pretrain the weights. This network used all three modalities, including slices with empty masks (23,040 images in total). The GAN's task was to inpaint a brain scan covered by a black checkerboard mask. The weights learned this way were then loaded into the encoder and applied to our segmentation problem. This experiment did not show better results, with DSC values of 0.29 and 0.09 (unfrozen and frozen encoder weights, respectively). The sharp drop in the metric may have been caused by using weights pretrained on distant tasks (segmentation versus the self-supervised context encoder), as well as by the difficulty of the task due to the imbalanced dataset.
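The Combo loss mentioned in the abstract (Dice loss combined with a modified cross-entropy) can be sketched in NumPy as follows; the alpha and beta weightings are illustrative assumptions, not the settings used in the thesis:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss over a flattened probability map.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def weighted_bce(pred, target, beta=0.7, eps=1e-7):
    # Cross-entropy weighted by beta so that false negatives (missed lesion
    # pixels) cost more than false positives -- one way to "modify" the
    # cross-entropy for an imbalanced dataset (an assumption here).
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(beta * target * np.log(pred)
                    + (1.0 - beta) * (1.0 - target) * np.log(1.0 - pred))

def combo_loss(pred, target, alpha=0.5, beta=0.7):
    # alpha balances the two terms; both hyperparameters are illustrative.
    return alpha * weighted_bce(pred, target, beta) + (1.0 - alpha) * dice_loss(pred, target)

# A near-perfect prediction should score lower than a poor one.
target = np.array([0.0, 0.0, 1.0, 1.0])
good = np.array([0.01, 0.01, 0.99, 0.99])
bad = np.array([0.9, 0.9, 0.1, 0.1])
print(combo_loss(good, target) < combo_loss(bad, target))  # True
```

The Dice term directly rewards overlap with the small lesion regions, which is why it is favored over plain cross-entropy for imbalanced segmentation masks.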
312

Neural networks regularization through representation learning / Régularisation des réseaux de neurones via l'apprentissage des représentations

Belharbi, Soufiane 06 July 2018 (has links)
Les modèles de réseaux de neurones et en particulier les modèles profonds sont aujourd'hui l'un des modèles à l'état de l'art en apprentissage automatique et ses applications. Les réseaux de neurones profonds récents possèdent de nombreuses couches cachées ce qui augmente significativement le nombre total de paramètres. L'apprentissage de ce genre de modèles nécessite donc un grand nombre d'exemples étiquetés, qui ne sont pas toujours disponibles en pratique. Le sur-apprentissage est un des problèmes fondamentaux des réseaux de neurones, qui se produit lorsque le modèle apprend par coeur les données d'apprentissage, menant à des difficultés à généraliser sur de nouvelles données. Le problème du sur-apprentissage des réseaux de neurones est le thème principal abordé dans cette thèse. Dans la littérature, plusieurs solutions ont été proposées pour remédier à ce problème, tels que l'augmentation de données, l'arrêt prématuré de l'apprentissage ("early stopping"), ou encore des techniques plus spécifiques aux réseaux de neurones comme le "dropout" ou la "batch normalization". Dans cette thèse, nous abordons le sur-apprentissage des réseaux de neurones profonds sous l'angle de l'apprentissage de représentations, en considérant l'apprentissage avec peu de données. Pour aboutir à cet objectif, nous avons proposé trois différentes contributions. La première contribution, présentée dans le chapitre 2, concerne les problèmes à sorties structurées dans lesquels les variables de sortie sont à grande dimension et sont généralement liées par des relations structurelles. Notre proposition vise à exploiter ces relations structurelles en les apprenant de manière non-supervisée avec des autoencodeurs. Nous avons validé notre approche sur un problème de régression multiple appliquée à la détection de points d'intérêt dans des images de visages. Notre approche a montré une accélération de l'apprentissage des réseaux et une amélioration de leur généralisation. 
La deuxième contribution, présentée dans le chapitre 3, exploite la connaissance a priori sur les représentations à l'intérieur des couches cachées dans le cadre d'une tâche de classification. Cet à priori est basé sur la simple idée que les exemples d'une même classe doivent avoir la même représentation interne. Nous avons formalisé cet à priori sous la forme d'une pénalité que nous avons rajoutée à la fonction de perte. Des expérimentations empiriques sur la base MNIST et ses variantes ont montré des améliorations dans la généralisation des réseaux de neurones, particulièrement dans le cas où peu de données d'apprentissage sont utilisées. Notre troisième et dernière contribution, présentée dans le chapitre 4, montre l'intérêt du transfert d'apprentissage ("transfer learning") dans des applications dans lesquelles peu de données d'apprentissage sont disponibles. L'idée principale consiste à pré-apprendre les filtres d'un réseau à convolution sur une tâche source avec une grande base de données (ImageNet par exemple), pour les insérer par la suite dans un nouveau réseau sur la tâche cible. Dans le cadre d'une collaboration avec le centre de lutte contre le cancer "Henri Becquerel de Rouen", nous avons construit un système automatique basé sur ce type de transfert d'apprentissage pour une application médicale où l'on dispose d’un faible jeu de données étiquetées. Dans cette application, la tâche consiste à localiser la troisième vertèbre lombaire dans un examen de type scanner. L’utilisation du transfert d’apprentissage ainsi que de prétraitements et de post traitements adaptés a permis d’obtenir des bons résultats, autorisant la mise en oeuvre du modèle en routine clinique. / Neural network models and deep models are one of the leading and state of the art models in machine learning. They have been applied in many different domains. Most successful deep neural models are the ones with many layers which highly increases their number of parameters. 
Training such models requires a large number of training samples which is not always available. One of the fundamental issues in neural networks is overfitting which is the issue tackled in this thesis. Such problem often occurs when the training of large models is performed using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance such as data augmentation, early stopping, parameters sharing, unsupervised learning, dropout, batch normalization, etc. In this thesis, we tackle the neural network overfitting issue from a representation learning perspective by considering the situation where few training samples are available which is the case of many real world applications. We propose three contributions. The first one presented in chapter 2 is dedicated to dealing with structured output problems to perform multivariate regression when the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data has shown to improve the network generalization and speedup its training. The second contribution described in chapter 3 deals with the classification task where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty that we add to the training cost to be minimized. Empirical experiments over MNIST and its variants showed an improvement of the network generalization when using only few training samples. Our last contribution presented in chapter 4 showed the interest of transfer learning in applications where only few samples are available. 
The idea consists in re-using the filters of convolutional networks pre-trained on large datasets such as ImageNet. The pre-trained filters are plugged into a new convolutional network with new dense layers, and the whole network is then trained on the new task. In this contribution, we provide an automatic system based on this learning scheme, with an application to the medical domain. In this application, the task consists in localizing the third lumbar vertebra in a 3D CT scan. The proposed system includes a pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision. This work was done in collaboration with the "Henri Becquerel Center" clinic in Rouen, which provided us with the data.
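The chapter 3 prior, that samples of the same class should share an internal representation, can be expressed as a penalty added to the training cost. This NumPy version is an illustration of the idea, not the author's exact formulation:

```python
import numpy as np

def class_rep_penalty(H, y):
    """Mean squared distance of each hidden representation to its class
    centroid -- zero exactly when all samples of a class share one
    representation. H: (n, d) hidden activations, y: (n,) class labels."""
    penalty = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        centroid = Hc.mean(axis=0)
        penalty += np.sum((Hc - centroid) ** 2)
    return penalty / len(H)

# Two classes whose members already share a representation incur no penalty.
H = np.array([[1., 0.], [1., 0.], [0., 2.], [0., 2.]])
y = np.array([0, 0, 1, 1])
print(class_rep_penalty(H, y))  # 0.0
```

During training this term would be scaled by a coefficient and added to the classification loss, pulling same-class activations together in the hidden space.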
313

Etude et prédiction d'attention visuelle avec les outils d'apprentissage profond en vue d'évaluation des patients atteints des maladies neuro-dégénératives / Study and prediction of visual attention with deep learning networks in view of assessment of patients with neurodegenerative diseases

Chaabouni, Souad 08 December 2017 (has links)
Cette thèse est motivée par le diagnostic et l'évaluation des maladies neuro-dégénératives, dans le but d'établir un diagnostic sur la base de l'attention visuelle. Néanmoins, le dépistage à grande échelle de la population n'est possible que si des modèles de prédiction automatique suffisamment robustes peuvent être construits. Dans ce contexte, nous nous intéressons à la conception et au développement de modèles de prédiction automatique pour un contenu visuel spécifique à utiliser dans l'expérience psycho-visuelle impliquant des patients atteints de maladies neuro-dégénératives. La difficulté d'une telle prédiction réside dans une très faible quantité de données d'entraînement. Les modèles de saillance visuelle ne peuvent pas être fondés sur les caractéristiques « bottom-up » uniquement, comme le suggère la théorie de l'intégration des caractéristiques. La composante « top-down » de l'attention visuelle humaine devient prépondérante au fur et à mesure de l'observation de la scène visuelle. L'attention visuelle peut être prédite en se basant sur les scènes déjà observées. Les réseaux de convolution profonds (CNN) se sont révélés être un outil puissant pour prédire les zones saillantes dans les images statiques. Dans le but de construire un modèle de prédiction automatique pour les zones saillantes dans les vidéos naturelles et intentionnellement dégradées, nous avons conçu une architecture spécifique de CNN profond. Pour surmonter le manque de données d'apprentissage, nous avons conçu un système d'apprentissage par transfert dérivé de la méthode de Bengio. Nous mesurons ses performances lors de la prédiction de régions saillantes. Les résultats obtenus sont intéressants concernant la réaction des sujets témoins normaux face aux zones dégradées dans les vidéos.
La comparaison de la carte de saillance prédite des vidéos intentionnellement dégradées avec des cartes de densité de fixation du regard et d'autres modèles de référence montre l'intérêt du modèle développé. / This thesis is motivated by the diagnosis and evaluation of dementia diseases, with the aim of predicting whether a newly recorded gaze suggests one of these diseases. Nevertheless, large-scale population screening is only possible if robust prediction models can be constructed. In this context, we are interested in the design and development of automatic prediction models for specific visual content to be used in the psycho-visual experience involving patients with dementia (PwD). The difficulty of such a prediction lies in a very small amount of training data. Visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory. The top-down component of human visual attention becomes prevalent as human observers explore the visual scene. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNNs) have proven to be a powerful tool for the prediction of salient areas in static images. In order to construct an automatic prediction model for the salient areas in natural and intentionally degraded videos, we have designed a specific CNN architecture. To overcome the lack of learning data, we designed a transfer learning scheme derived from Bengio's method. We measure its performance when predicting salient regions. The obtained results are interesting regarding the reaction of normal control subjects to degraded areas in videos. The predicted saliency maps of intentionally degraded videos give interesting results compared to gaze fixation density maps and other reference models.
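Comparing a predicted saliency map against gaze fixation density maps is commonly done with the Normalized Scanpath Saliency (NSS) score; the abstract does not name its exact metrics, so this NumPy sketch is illustrative of the evaluation step, not the thesis's implementation:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average it at fixated pixels. Positive values mean fixations fall on
    above-average predicted saliency.
    saliency: 2D float map; fixations: 2D boolean mask of gaze points."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(s[fixations].mean())

sal = np.array([[0.9, 0.1],
                [0.1, 0.1]])
fix = np.array([[True, False],
                [False, False]])
print(nss(sal, fix) > 0)  # True: the fixation lands on above-average saliency
```

A model whose degraded-region predictions align with control subjects' fixation maps would score consistently above zero under such a metric.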
314

Utilisation du plongement du domaine pour l’adaptation non supervisée en traduction automatique

Frenette, Xavier 11 1900 (has links)
L'industrie de la traduction utilise de plus en plus des modèles de traduction automatique. Des modèles dits « universels » sont capables d'obtenir de bonnes performances lorsqu'évalués sur un large ensemble de domaines, mais leurs performances sont souvent limitées lorsqu'ils sont testés sur des domaines précis. Or, les traductions doivent être adaptées au style, au sujet et au vocabulaire des différents domaines, en particulier ceux des nouveaux (pensons aux textes reliés à la COVID-19). Entrainer un nouveau modèle pour chaque domaine demande du temps, des outils technologiques spécialisés et de grands ensembles de données. De telles ressources ne sont généralement pas disponibles. Nous proposons, dans ce mémoire, d'évaluer une nouvelle technique de transfert d'apprentissage pour l'adaptation à un domaine précis. La technique peut s'adapter rapidement à tout nouveau domaine, sans entrainement supplémentaire et de façon non supervisée. À partir d'un échantillon de phrases du nouveau domaine, le modèle lui calcule une représentation vectorielle qu'il utilise ensuite pour guider ses traductions. Pour calculer ce plongement de domaine, nous testons cinq différentes techniques. Nos expériences démontrent qu'un modèle qui utilise un tel plongement réussit à extraire l'information qui s'y trouve pour guider ses traductions. Nous obtenons des résultats globalement supérieurs à un modèle de traduction qui aurait été entrainé sur les mêmes données, mais sans utiliser le plongement. Notre modèle est plus avantageux que d'autres techniques d'adaptation de domaine puisqu'il est non supervisé, qu'il ne requiert aucun entrainement supplémentaire pour s'adapter et qu'il s'adapte très rapidement (en quelques secondes) uniquement à partir d'un petit ensemble de phrases. / Machine translation models usage is increasing in the translation industry. 
What we could call "universal" models attain good performance when evaluated over a wide set of domains, but their performance is often limited when tested on specific domains. Translations must be adapted to the style, subjects and vocabulary of different domains, especially new ones (the COVID-19 texts, for example). Training a new model on each domain requires time, specialized technological tools and large data sets. Such resources are generally not available. In this master's thesis, we propose to evaluate a novel transfer learning technique for domain adaptation. The technique can adapt quickly to any new domain, without additional training, and in an unsupervised manner. Given a sample of sentences from the new domain, the model computes a vector representation for the domain that is then used to guide its translations. To compute this domain embedding, we test five different techniques. Our experiments show that a model that uses this embedding performs better overall than a translation model trained on the same data, but without the embedding. Our model is more advantageous than other domain adaptation techniques since it is unsupervised, requires no additional training to adapt, and adapts very quickly (within seconds) from a small set of sentences only.
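The adaptation step described above, computing a domain vector from a small sample of sentences and conditioning the model on it, can be sketched as follows. The sentence encoder here is a deliberately toy hashed bag-of-words stand-in; the thesis evaluates five real embedding techniques, none of which is this one:

```python
import numpy as np

def hashed_bow_embedding(sentence, dim=16):
    # Toy stand-in for a real sentence encoder (an assumption for
    # illustration only).
    v = np.zeros(dim)
    for tok in sentence.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def domain_embedding(sentences, dim=16):
    # Average the sentence vectors of a small in-domain sample; the
    # resulting vector would be fed to the translation model as extra
    # conditioning input, with no further training.
    return np.mean([hashed_bow_embedding(s, dim) for s in sentences], axis=0)

emb = domain_embedding(["the spike protein binds the receptor",
                        "viral load was measured daily"])
print(emb.shape)  # (16,)
```

The key property this illustrates is the cost profile claimed in the abstract: adapting to a new domain is a single forward pass over a handful of sentences, not a retraining run.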
315

The Classification of Kinase Inhibitors on Five Channel Cell Painting Data Using Deep Learning

Yang, Ximeng January 2021 (has links)
Purpose: This project explores the classification of kinase inhibitors from five-channel cell painting image data using a deep learning model. Methods: A ResNet50 transfer learning model was used as the starting point for building the deep neural network (DNN) model, with different DNN parameters selected to make the model more suitable for the cell painting data. Two different adaptive layers (adaptive average pooling 3D and convolution 2D) were added separately before the ResNet50 model to adapt the five-channel cell painting images to the neural network. In addition, the skimage.transform.resize function was used to compress the five-channel images. Results: The proposed deep learning model was effective in all three classification experiments. It performs particularly well in classifying among the control, EGFR, PIKK and CDK kinase inhibitor families, achieving an F1-score of 0.7764 over all four targets and a 93% accuracy rate for the PIKK family. The adaptive average pooling 3D layer successfully adapts the five-channel images to the model and improves performance. Compressing the image size reduces the model's training time to one-fortieth. Conclusion: The proposed model classified the inhibitor families convincingly, demonstrating progress in building deep learning models to classify kinase inhibitors on five-channel cell painting data. This study also showed the feasibility of feeding five-channel cell painting images directly into a DNN, and that compressing the images sharply increases the model's speed without obvious loss of information.
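The convolution 2D adaptive layer mentioned above boils down to a learnable 1x1 convolution that maps the five cell painting channels to the three channels an ImageNet-pretrained ResNet50 expects. A minimal NumPy sketch, with randomly initialized weights standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1_adapter(x, W):
    """Map a multi-channel image to a new channel count via a 1x1
    convolution. x: (C_in, H, W_img); W: (C_out, C_in). tensordot
    contracts the channel axis, which is exactly a 1x1 convolution."""
    return np.tensordot(W, x, axes=([1], [0]))

image = rng.random((5, 224, 224))       # five-channel cell painting image
W = rng.standard_normal((3, 5)) * 0.1   # adapter weights (random here)
adapted = conv1x1_adapter(image, W)
print(adapted.shape)  # (3, 224, 224)
```

In the real model these adapter weights are trained jointly with the rest of the network, letting the pretrained backbone stay untouched while still accepting five-channel input.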
316

All Negative on the Western Front: Analyzing the Sentiment of the Russian News Coverage of Sweden with Generic and Domain-Specific Multinomial Naive Bayes and Support Vector Machines Classifiers / På västfronten intet gott: attitydanalys av den ryska nyhetsrapporteringen om Sverige med generiska och domänspecifika Multinomial Naive Bayes- och Support Vector Machines-klassificerare

Michel, David January 2021 (has links)
This thesis explores to what extent Multinomial Naive Bayes (MNB) and Support Vector Machines (SVM) classifiers can be used to determine the polarity of news, specifically the news coverage of Sweden by the Russian state-funded news outlets RT and Sputnik. Three experiments are conducted.  In the first experiment, an MNB and an SVM classifier are trained with the Large Movie Review Dataset (Maas et al., 2011) with a varying number of samples to determine how training data size affects classifier performance.  In the second experiment, the classifiers are trained with 300 positive, negative, and neutral news articles (Agarwal et al., 2019) and tested on 95 RT and Sputnik news articles about Sweden (Bengtsson, 2019) to determine if the domain specificity of the training data outweighs its limited size.  In the third experiment, the movie-trained classifiers are put up against the domain-specific classifiers to determine if well-trained classifiers from another domain perform better than relatively untrained, domain-specific classifiers.  Four different types of feature sets (unigrams, unigrams without stop words removal, bigrams, trigrams) were used in the experiments. Some of the model parameters (TF-IDF vs. feature count and SVM’s C parameter) were optimized with 10-fold cross-validation.  Other than the superior performance of SVM, the results highlight the need for comprehensive and domain-specific training data when conducting machine learning tasks, as well as the benefits of feature engineering, and to a limited extent, the removal of stop words. Interestingly, the classifiers performed the best on the negative news articles, which made up most of the test set (and possibly of Russian news coverage of Sweden in general).
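A multinomial naive Bayes classifier of the kind compared here can be sketched in a few lines. This toy implementation with add-one smoothing is illustrative only: it omits the TF-IDF weighting, n-gram features, and parameter tuning the thesis actually evaluates:

```python
import math
from collections import Counter

class TinyMNB:
    """Minimal multinomial naive Bayes with add-one (Laplace) smoothing."""
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.lower().split())  # unigram features
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        return self

    def predict(self, doc):
        def log_posterior(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total)
                for w in doc.lower().split() if w in self.vocab)
        return max(self.classes, key=log_posterior)

clf = TinyMNB().fit(
    ["great wonderful film", "awful boring plot",
     "wonderful acting", "boring awful mess"],
    ["pos", "neg", "pos", "neg"])
print(clf.predict("wonderful film"))  # pos
```

The same interface extends naturally to the three-class (positive/negative/neutral) news setting of the second experiment; only the training corpus changes.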
317

[en] CONVOLUTIONAL NETWORKS APPLIED TO SEMANTIC SEGMENTATION OF SEISMIC IMAGES / [pt] REDES CONVOLUCIONAIS APLICADAS À SEGMENTAÇÃO SEMÂNTICA DE IMAGENS SÍSMICAS

MATEUS CABRAL TORRES 10 August 2021 (has links)
[pt] A partir de melhorias incrementais em uma conhecida rede neural convolucional (U-Net), diferentes técnicas são avaliadas quanto às suas performances na tarefa de segmentação semântica em imagens sísmicas. Mais especificamente, procura-se a identificação e delineamento de estruturas salinas no subsolo, o que é de grande relevância na indústria de óleo e gás para a exploração de petróleo em camadas pré-sal, por exemplo. Além disso, os desafios apresentados no tratamento destas imagens sísmicas se assemelham em muito aos encontrados em tarefas de áreas médicas como identificação de tumores e segmentação de tecidos, o que torna o estudo da tarefa em questão ainda mais valioso. Este trabalho pretende sugerir uma metodologia adequada de abordagem à tarefa e produzir redes neurais capazes de segmentar imagens sísmicas com bons resultados dentro das métricas utilizadas. Para alcançar estes objetivos, diferentes estruturas de redes, transferência de aprendizado e técnicas de aumentação de dados são testadas em dois datasets com diferentes níveis de complexidade. / [en] Through incremental improvements in a well-known convolutional neural network (U-Net), different techniques are evaluated regarding their performance on the task of semantic segmentation of seismic images. More specifically, the objective is the better identification and outline of subsurface salt structures, which is a task of great relevance for the oil and gas industry in the exploration of pre-salt layers, for example. Besides that application, the challenges imposed by the treatment of seismic images also resemble those found in medical fields like tumor detection and tissue segmentation, which makes the study of this task even more valuable. This work seeks to suggest a suitable methodology for the task and to yield neural networks that are capable of performing semantic segmentation of seismic images with good results regarding specific metrics. 
For that purpose, different network structures, transfer learning and data augmentation techniques are applied in two datasets with different levels of complexity.
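Segmentation quality for salt-body outlines is typically scored with an overlap metric such as Intersection over Union; the abstract does not name its specific metrics, so this NumPy sketch is a generic illustration of the evaluation step:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union for binary masks: overlapping positive
    pixels divided by pixels positive in either mask."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]], dtype=bool)
true = np.array([[1, 0, 0],
                 [0, 1, 1]], dtype=bool)
print(iou(pred, true))  # 0.5: 2 overlapping pixels out of 4 in the union
```

Averaging this score over a test set gives a single number for comparing network structures, transfer learning setups, and augmentation schemes on each dataset.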
318

A COMPREHENSIVE UNDERWATER DOCKING APPROACH THROUGH EFFICIENT DETECTION AND STATION KEEPING WITH LEARNING-BASED TECHNIQUES

Jalil Francisco Chavez Galaviz (17435388) 11 December 2023 (has links)
<p dir="ltr">The growing movement toward sustainable use of ocean resources is driven by the pressing need to alleviate environmental and human stressors on the planet and its oceans. From monitoring the food web to supporting sustainable fisheries and observing environmental shifts to protect against the effects of climate change, ocean observations significantly impact the Blue Economy. Acknowledging the critical role of Autonomous Underwater Vehicles (AUVs) in achieving persistent ocean exploration, this research addresses challenges focusing on the limited energy and storage capacity of AUVs, introducing a comprehensive underwater docking solution with a specific emphasis on enhancing the terminal homing phase through innovative vision algorithms leveraging neural networks.</p><p dir="ltr">The primary goal of this work is to establish a docking procedure that is failure-tolerant, scalable, and systematically validated across diverse environmental conditions. To fulfill this objective, a robust dock detection mechanism has been developed that ensures the resilience of the docking procedure through improved detection in challenging environmental conditions. Additionally, the study addresses the prevalent issue of data sparsity in the marine domain by artificially generating data using CycleGAN and Artistic Style Transfer. These approaches effectively provide sufficient data for the docking detection algorithm, improving the localization of the docking station.</p><p dir="ltr">Furthermore, this work introduces methods to compress the learned docking detection model without compromising performance, enhancing the efficiency of the overall system. Alongside these advancements, a station-keeping algorithm is presented, enabling the mobile docking station to maintain position and heading while awaiting the arrival of the AUV. 
To leverage the sensors onboard and to take advantage of the computational resources to their fullest extent, this research has demonstrated the feasibility of simultaneously learning docking detection and marine wildlife classification through multi-task and transfer learning. This multifaceted approach not only tackles the limitations of AUVs' energy and storage capacity but also contributes to the robustness, scalability, and systematic validation of underwater docking procedures, aligning with the broader goals of sustainable ocean exploration and the blue economy.</p>
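The abstract does not specify how the detection model is compressed; magnitude pruning, zeroing the smallest weights so the model can be stored and executed more cheaply, is one common technique and serves here only as an illustrative sketch:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    sparsity=0.5 removes (at least) the smallest half; ties at the
    threshold may prune slightly more."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

W = np.array([[0.9, -0.05],
              [0.001, -0.8]])
print(magnitude_prune(W, 0.5))  # keeps only 0.9 and -0.8
```

After pruning, the sparse weight matrix can be stored in compressed form, which matters on the constrained compute and storage budget of an AUV.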
319

Multi-Scale Task Dynamics in Transfer and Multi-Task Learning : Towards Efficient Perception for Autonomous Driving / Flerskalig Uppgiftsdynamik vid Överförings- och Multiuppgiftsinlärning : Mot Effektiv Perception för Självkörande Fordon

Ekman von Huth, Simon January 2023 (has links)
Autonomous driving technology has the potential to revolutionize the way we think about transportation and its impact on society. Perceiving the environment is a key aspect of autonomous driving, which involves multiple computer vision tasks. Multi-scale deep learning has dramatically improved the performance on many computer vision tasks, but its practical use in autonomous driving is limited by the available resources in embedded systems. Multi-task learning offers a solution to this problem by allowing more compact deep learning models that share parameters between tasks. However, not all tasks benefit from being learned together. One way of avoiding task interference during training is to learn tasks in sequence, with each task providing useful information for the next – a scheme which builds on transfer learning. Multi-task and transfer dynamics are both concerned with the relationships between tasks, but have previously only been studied separately. This Master’s thesis investigates how different computer vision tasks relate to each other in the context of multi-task and transfer learning, using a state-of-the-art efficient multi-scale deep learning model. Through an experimental research methodology, the performance on semantic segmentation, depth estimation, and object detection was evaluated on the Virtual KITTI 2 dataset in a multi-task and transfer learning setting. In addition, transfer learning with a frozen encoder was compared to constrained encoder fine tuning, to uncover the effects of fine-tuning on task dynamics. The results suggest that findings from previous work regarding semantic segmentation and depth estimation in multi-task learning generalize to multi-scale learning on autonomous driving data. Further, no statistically significant correlation was found between multi-task learning dynamics and transfer learning dynamics. 
An analysis of the results from transfer learning indicate that some tasks might be more sensitive to fine-tuning than others, suggesting that transferring with a frozen encoder only captures a subset of the complexities involved in transfer relationships. Regarding object detection, it is observed to negatively impact the performance on other tasks during multi-task learning, but might be a valuable task to transfer from due to lower annotation costs. Possible avenues for future work include applying the used methodology to real-world datasets and exploring ways of utilizing the presented findings for more efficient perception algorithms. / Självkörande teknik har potential att revolutionera transport och dess påverkan på samhället. Självkörning medför ett flertal uppgifter inom datorseende, som bäst löses med djupa neurala nätverk som lär sig att tolka bilder på flera olika skalor. Begränsningar i mobil hårdvara kräver dock att tekniker som multiuppgifts- och sekventiell inlärning används för att minska neurala nätverkets fotavtryck, där sekventiell inlärning bygger på överföringsinlärning. Dynamiken bakom både multiuppgiftsinlärning och överföringsinlärning kan till stor del krediteras relationen mellan olika uppdrag. Tidigare studier har dock bara undersökt dessa dynamiker var för sig. Detta examensarbete undersöker relationen mellan olika uppdrag inom datorseende från perspektivet av både multiuppgifts- och överföringsinlärning. En experimentell forskningsmetodik användes för att jämföra och undersöka tre uppgifter inom datorseende på datasetet Virtual KITTI 2. Resultaten stärker tidigare forskning och föreslår att tidigare fynd kan generaliseras till flerskaliga nätverk och data för självkörning. Resultaten visar inte på någon signifikant korrelation mellan multiuppgift- och överföringsdynamik. Slutligen antyder resultaten att vissa uppgiftspar ställer högre krav än andra på att nätverket anpassas efter överföring.
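Transferring with a frozen encoder versus fine-tuning it amounts to excluding the encoder's parameters from the gradient update while the task head keeps learning. A minimal sketch (parameter names and values are illustrative, not from the thesis):

```python
import numpy as np

def sgd_step(params, grads, lr=0.1, frozen=()):
    """One SGD step that skips any parameter named in `frozen`."""
    return {name: (p if name in frozen else p - lr * grads[name])
            for name, p in params.items()}

params = {"encoder": np.ones(3), "head": np.ones(3)}
grads = {"encoder": np.full(3, 0.5), "head": np.full(3, 0.5)}

frozen_run = sgd_step(params, grads, frozen=("encoder",))  # transfer, frozen
finetune_run = sgd_step(params, grads, frozen=())          # fine-tuning

print(frozen_run["encoder"])    # unchanged: [1. 1. 1.]
print(finetune_run["encoder"])  # updated:   [0.95 0.95 0.95]
```

The thesis's observation that a frozen encoder captures only part of the transfer relationship follows directly from this asymmetry: freezing fixes the source-task features, while fine-tuning lets them drift toward the target task.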
320

3D OBJECT DETECTION USING VIRTUAL ENVIRONMENT ASSISTED DEEP NETWORK TRAINING

Ashley S Dale (8771429) 07 January 2021 (has links)
<div> <div> <div> <p>An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic data and real-world data, F1 scores improved in four of the five classes: The average maximum F1-score of all classes and all epochs for the networks trained with synthetic data is F1∗ = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score for synthetically trained networks is σ∗<sub>F1</sub> = 0.015, compared to σ<sub>F1</sub> = 0.020 for the networks trained exclusively with real data. Various backgrounds in synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network, the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background. </p></div></div></div>
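The per-class F1-scores reported above are the harmonic mean of precision and recall; a short sketch of the computation from raw detection counts:

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example counts (illustrative, not from the thesis):
print(f1_score(tp=8, fp=2, fn=2))  # 0.8: precision = recall = 0.8
```

Averaging the per-class maxima over classes and epochs, as the abstract does, then gives a single F1∗ figure per training configuration.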
