151
Land Use and Land Cover Classification Using Deep Learning Techniques. January 2016.
abstract: Large datasets of sub-meter aerial imagery, represented as orthophoto mosaics, are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to reveal several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies with many factors, including the time the image was captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the region captured. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of a massive amount of VHR images in the visible red, green, blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use/land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; however, this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited to geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and accuracy levels are reported to show that the results actually generalize beyond the small benchmarks used in training. Deep networks have many parameters, and therefore they are often built with very large sets of labeled data.
Suitably large datasets for LULC are not easy to come by, but techniques such as transfer learning (fine-tuning) allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (Convolutional Neural Network) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set. / Dissertation/Thesis / Masters Thesis Computer Science 2016
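The transfer recipe this abstract describes (reuse a network pretrained on ImageNet, retrain only a small classifier head on the limited labeled LULC data) can be sketched as follows. The frozen feature extractor is mocked by a fixed random projection and the "imagery" is synthetic, so everything below is an illustrative assumption, not the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x):
    """Stand-in for a frozen, ImageNet-pretrained CNN body: a fixed random
    projection with a ReLU, concatenated to the raw input. The extractor is
    never updated; only the small head below is trained."""
    W = np.random.default_rng(42).normal(size=(x.shape[1], 16)) / np.sqrt(x.shape[1])
    return np.concatenate([x, np.maximum(x @ W, 0.0)], axis=1)

def train_head(feats, labels, lr=0.1, steps=500):
    """Logistic-regression head trained on top of the frozen features."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - labels                        # gradient of the log-loss
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# Tiny synthetic stand-in for image descriptors of two land-cover classes.
x = rng.normal(size=(200, 8))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

feats = pretrained_features(x)
w, b = train_head(feats, y)
acc = (((feats @ w + b) > 0).astype(float) == y).mean()
```

Because only the low-dimensional head is fit, very little labeled data is needed; this is the mechanism that lets the thesis avoid massive training sets.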
152
Learning information retrieval functions and parameters on unlabeled collections / Apprentissage des fonctions de la recherche d'information et leurs paramètres sur des collections non-étiquetées. Goswami, Parantapa. 6 October 2014.
Dans cette thèse, nous nous intéressons (a) à l'estimation des paramètres de modèles standards de Recherche d'Information (RI), et (b) à l'apprentissage de nouvelles fonctions de RI. Nous explorons d'abord plusieurs méthodes permettant, a priori, d'estimer le paramètre de collection des modèles d'information (chapitre 2). Jusqu'à présent, ce paramètre était fixé au nombre moyen de documents dans lesquels un mot donné apparaissait. Nous présentons ici plusieurs méthodes d'estimation de ce paramètre et montrons qu'il est possible d'améliorer les performances du système de recherche d'information lorsque ce paramètre est estimé de façon adéquate. Pour cela, nous proposons une approche basée sur l'apprentissage par transfert qui peut prédire les valeurs de paramètre de n'importe quel modèle de RI. Cette approche utilise des jugements de pertinence d'une collection source existante pour apprendre une fonction de régression permettant de prédire les paramètres optimaux d'un modèle de RI sur une nouvelle collection cible non-étiquetée. Avec ces paramètres prédits, les modèles de RI sont non seulement plus performants que les mêmes modèles avec leurs paramètres par défaut, mais aussi que ceux optimisés en utilisant les jugements de pertinence de la collection cible. Nous étudions ensuite une technique de transfert permettant d'induire des pseudo-jugements de pertinence sur des couples de documents par rapport à une requête donnée d'une collection cible. Ces jugements de pertinence sont obtenus grâce à une grille d'information récapitulant les caractéristiques principales d'une collection. Ces pseudo-jugements de pertinence sont ensuite utilisés pour apprendre une fonction d'ordonnancement en utilisant n'importe quel algorithme d'ordonnancement existant. Dans les nombreuses expériences que nous avons menées, cette technique permet de construire une fonction d'ordonnancement plus performante que d'autres proposées dans l'état de l'art.
Dans le dernier chapitre de cette thèse, nous proposons une technique exhaustive pour rechercher des fonctions de RI dans l'espace des fonctions existantes en utilisant une grammaire permettant de restreindre l'espace de recherche et en respectant les contraintes de la RI. Certaines fonctions obtenues sont plus performantes que les modèles de RI standards. / The present study focuses on (a) predicting parameters of existing standard IR models and (b) learning new IR functions. We first explore various statistical methods to estimate the collection parameter of the family of information-based models (Chapter 2). This parameter determines the behavior of a term in the collection. In earlier studies, it was set to the average number of documents in which the term appears, without full justification. We introduce here a fully formalized estimation method which leads to improved versions of these models over the original ones. However, this method applies only to estimating the collection parameter within the information-model framework. To alleviate this, we propose a transfer learning approach which can predict values for any parameter of any IR model (Chapter 3). This approach uses relevance judgments on a past collection to learn a regression function which can infer parameter values for each single query on a new, unlabeled target collection. The proposed method not only outperforms the standard IR models with their default parameter values, but also yields performance better than or on par with popular parameter-tuning methods that use relevance judgments on the target collection. We then investigate the application of transfer learning techniques to directly transfer relevance information from a source collection and derive "pseudo-relevance" judgments on an unlabeled target collection (Chapter 4). From these derived pseudo-relevance judgments, a ranking function that can rank documents in the target collection is learned using any standard learning-to-rank algorithm.
In various experiments the learned function outperformed standard IR models as well as other state-of-the-art transfer learning algorithms. Although a ranking function learned in this way is effective, its form is predefined by the learning algorithm used. We thus introduce an exhaustive discovery approach to search for ranking functions in a space of simple functions (Chapter 5). Through experimentation we found that some of the discovered functions are highly competitive with standard IR models.
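The Chapter 3 idea, learning on a labeled source collection a regression from per-query features to the best-performing parameter value and then predicting parameters for queries of an unlabeled target collection, can be sketched with a ridge regression. The query features and the linear relationship below are synthetic assumptions, not the thesis's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Source collection: per-query features (e.g. query length, average IDF)
# paired with the parameter value that maximised retrieval quality for that
# query. The linear ground truth is a made-up assumption for illustration.
X_src = rng.normal(size=(100, 3))
true_w = np.array([0.5, -0.2, 0.1])
y_src = X_src @ true_w + 1.0 + rng.normal(scale=0.01, size=100)

# Ridge regression with an intercept, fit in closed form on the source.
X1 = np.c_[X_src, np.ones(len(X_src))]
lam = 1e-3
w = np.linalg.solve(X1.T @ X1 + lam * np.eye(4), X1.T @ y_src)

# Predict per-query parameter values for an unlabeled target collection:
# no relevance judgments on the target are needed at this point.
X_tgt = rng.normal(size=(10, 3))
pred = np.c_[X_tgt, np.ones(len(X_tgt))] @ w
err = np.abs(pred - (X_tgt @ true_w + 1.0)).max()
```

The IR model would then be run on the target collection with these predicted, per-query parameter values instead of its defaults.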
153
Deteção de Spam baseada na evolução das características com presença de Concept Drift / Spam detection based on feature evolution in the presence of concept drift. Henke, Márcia. 30 March 2015.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Electronic messages (e-mails) are still considered the most significant tools in business
and personal applications due to their low cost and easy access. However, e-mails have become a major problem owing to the high volume of junk mail, known as spam, which fills users' mailboxes. Among the many problems caused by spam messages, the most prominent is that spam is currently the main vector for the spread of malicious activities such as viruses, worms, trojans, phishing, and botnets. Such activities allow an attacker to gain improper access to confidential data and trade secrets, or to invade the privacy of victims for some advantage. Several approaches have been proposed to prevent the sending of unsolicited e-mail, such as filters implemented in e-mail servers, classification mechanisms that let users define when a particular subject or author is a source of spam, and even filters implemented in network devices. In general, e-mail filtering approaches are based on analyzing message content to determine whether or not a message is spam. A major problem with this approach is spam detection in the presence of concept drift. The literature defines concept drift as changes occurring in the concept of the data over time, such as changes in the features that describe an attack or the appearance of new features. Many Intrusion Detection Systems (IDS) use machine learning techniques to monitor the classification error rate in order to detect change. However, by the time a change is detected, some damage has already been done to the system, which requires updating the classification process and intervention by the system operator. To overcome these problems, this work proposes a new change detection method, named Method oriented to the Analysis of the Development of Attacks Characteristics (MECA). The proposed method consists of three steps: 1) classification model training; 2) concept drift detection; and 3) transfer learning. The first step generates classification models as is commonly done in machine learning. The second step introduces two new strategies to handle concept drift: HFS (Historical-based Features Selection), which analyzes the evolution of the features based on their history over time; and SFS (Similarity-based Features Selection), which analyzes the evolution of the features based on the level of similarity between the feature vectors of the source and target domains.
Finally, the third step focuses on the following questions: what, how and when to transfer acquired knowledge. The answer to the first question is provided by the concept drift detection strategies that identify the new features and store them to be
transferred. To answer the second question, the feature representation transfer approach
is employed. Finally, the transfer of new knowledge is executed as soon as changes that
compromise the classification task performance are identified. The proposed method was developed and validated using two public databases, one of which was built during this thesis. The experimental results showed that it is possible to infer a threshold for detecting changes, ensuring that the classification model is kept up to date through knowledge transfer. In addition, the MECA architecture is able to perform the classification task and the concept drift detection as two parallel, independent tasks. Finally, MECA uses the SVM (Support Vector Machine) learning algorithm, which is less prone to overfitting the training samples. The results obtained with MECA showed that it is possible to detect changes by monitoring feature evolution before a significant degradation of the classification models occurs. / As mensagens eletrônicas (e-mails) ainda são consideradas as ferramentas de maior prestígio no meio empresarial e pessoal, pois apresentam baixo custo e facilidade de acesso. Por outro lado, os e-mails tornaram-se um grande problema devido à elevada quantidade de mensagens não desejadas, denominadas spam, que lotam as caixas de emails dos usuários. Dentre os diversos problemas causados pelas mensagens spam, destaca-se o fato de ser atualmente o principal vetor de propagação de atividades
maliciosas como vírus, worms, cavalos de Tróia, phishing, botnets, dentre outros. Tais atividades permitem ao atacante acesso indevido a dados sigilosos, segredos de negócios ou mesmo invadir a privacidade das vítimas para obter alguma vantagem. Diversas abordagens, comerciais e acadêmicas, têm sido propostas para impedir o envio de mensagens de e-mails indesejados como filtros implementados nos servidores
de e-mail, mecanismos de classificação de mensagens de spam para que os usuários definam quando determinado assunto ou autor é fonte de propagação de spam e até mesmo filtros implementados em componentes eletrônicos de rede. Em geral, as abordagens de filtros de e-mail são baseadas na análise do conteúdo das mensagens para determinar se tal mensagem é ou não um spam. Um dos maiores problemas com essa abordagem é a deteção de spam na presença de concept drift. A literatura conceitua concept drift como mudanças que ocorrem no conceito dos dados ao longo do tempo como a alteração das características que descrevem um ataque ou ocorrência de novas características. Muitos Sistemas de Deteção de Intrusão (IDS) usam técnicas de aprendizagem de máquina para monitorar a taxa de erro de
classificação no intuito de detetar mudança. Entretanto, quando a deteção ocorre, algum dano já foi causado ao sistema, fato que requer atualização do processo de classificação e a intervenção do operador do sistema. Com o objetivo de minimizar os problemas mencionados acima, esta tese propõe um método de deteção de mudança, denominado Método orientado à Análise da Evolução das Características de Ataques (MECA). O método proposto é composto por três etapas: 1) treino do modelo de classificação; 2) deteção de mudança; e 3) transferência do aprendizado. A primeira etapa emprega modelos de classificação comumente adotados em qualquer método que utiliza aprendizagem de máquina. A segunda etapa apresenta duas novas estratégias para contornar concept drift: HFS (Historical-based Features Selection) que analisa a evolução das características com base no histórico ao longo do tempo; e SFS (Similarity based Features Selection) que observa a evolução das características a partir do nível de similaridade obtido entre os vetores de características dos domínios fonte e alvo. Por fim, a terceira etapa concentra seu objetivo nas seguintes questões: o que, como e quando transferir conhecimento adquirido. A resposta à primeira questão é fornecida pelas estratégias de deteção de mudança, que identificam as novas características e as armazenam para que sejam transferidas. Para responder a segunda questão, a abordagem de transferência de representação de características é adotada. Finalmente, a transferência do novo conhecimento é realizada tão logo mudanças que comprometam o desempenho da tarefa de classificação sejam identificadas. O método MECA foi desenvolvido e validado usando duas bases de dados
públicas, sendo que uma das bases foi construída ao longo desta tese. Os resultados dos experimentos indicaram que é possível inferir um limiar para detetar mudanças a fim de garantir o modelo de classificação sempre atualizado por meio da transferência de conhecimento. Além disso, um diferencial apresentado no método MECA é a possibilidade de executar a tarefa de classificação em paralelo com a deteção de mudança, sendo as duas tarefas independentes. Por fim, o MECA utiliza o algoritmo de aprendizagem de máquina SVM (Support Vector Machines), que é menos aderente às amostras de treinamento. Os resultados obtidos com o MECA mostraram que é possível detetar mudanças por meio da evolução das características antes de ocorrer uma degradação significativa no modelo de classificação utilizado.
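The SFS strategy described above, which compares feature vectors of the source and target domains, can be illustrated with a cosine-similarity check; the vectors and the threshold below are made-up assumptions, not values from the thesis:

```python
import numpy as np

def sfs_drift(source_vec, target_vec, threshold=0.9):
    """Flag concept drift when the cosine similarity between the aggregate
    feature vectors of the source and target windows drops below a
    threshold. Returns (drift_flag, similarity)."""
    cos = source_vec @ target_vec / (
        np.linalg.norm(source_vec) * np.linalg.norm(target_vec))
    return cos < threshold, cos

src  = np.array([5.0, 3.0, 0.0, 1.0])   # e.g. frequencies of known spam features
same = np.array([4.8, 3.1, 0.2, 0.9])   # target window with a similar profile
new  = np.array([0.5, 0.2, 6.0, 4.0])   # new attack vocabulary dominates

drift_same, cos_same = sfs_drift(src, same)
drift_new, cos_new = sfs_drift(src, new)
```

When the flag fires, the features that grew in the target window are the candidates for the "what to transfer" question in the third step.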
154
Sentiment analysis of Swedish reviews and transfer learning using Convolutional Neural Networks. Sundström, Johan. January 2018.
Sentiment analysis is a field within machine learning that focuses on determining the contextual polarity of subjective information. It is a technique that can be used to analyze the "voice of the customer" and has been applied with success to the English language on opinionated data such as customer reviews, political opinions and social media posts. A major problem with machine learning models is that they are domain-dependent and will therefore not perform well on other domains. Transfer learning, or domain adaptation, is a research field that studies a model's ability to transfer knowledge across domains. In the extreme case a model is trained on data from one domain, the source domain, and tries to make accurate predictions on data from another domain, the target domain. The deep machine learning model Convolutional Neural Network (CNN) has in recent years gained much attention due to its performance in computer vision, both for in-domain classification and for transfer learning. It has also performed well on natural language processing problems but has not been investigated to the same extent for transfer learning in this area. The purpose of this thesis has been to investigate how well suited the CNN is for cross-domain sentiment analysis of Swedish reviews. The research has been conducted by investigating how the model performs when trained with data from different domains with varying amounts of source and target data. Additionally, the impact of different text representations on the model's transferability has also been studied. This study has shown that a CNN without pre-trained word embeddings is not that well suited for transfer learning, since it performs worse than a traditional logistic regression model. Substituting 20% of the source training data with target data can in many of the test cases boost performance by 7-8%, both for the logistic regression and the CNN model.
Using pre-trained word embeddings produced by a word2vec model increases the CNN's transferability as well as its in-domain performance; it outperforms the logistic regression model and the CNN without pre-trained embeddings in the majority of test cases.
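One way to see why pre-trained embeddings help transfer: words that never occur in the source domain's training data still carry sentiment because they sit near seen words in the shared embedding space. A minimal sketch with hand-made "pretrained" vectors (all values below are illustrative assumptions, not real word2vec output):

```python
import numpy as np

# Toy "pretrained" embeddings; in a real setting these would come from a
# word2vec model trained on a large Swedish corpus.
emb = {
    "bra":     np.array([ 1.0, 0.1]),   # good
    "utmärkt": np.array([ 0.9, 0.2]),   # excellent
    "dålig":   np.array([-1.0, 0.1]),   # bad
    "hemsk":   np.array([-0.9, 0.2]),   # terrible
    "film":    np.array([ 0.0, 1.0]),
    "bok":     np.array([ 0.0, 1.0]),
}

def doc_vec(tokens):
    """Average of the word vectors: a simple document representation."""
    return np.mean([emb[t] for t in tokens], axis=0)

# Source domain (film reviews): only "bra"/"dålig" occur in training.
X = np.array([doc_vec(["bra", "film"]), doc_vec(["dålig", "film"])])
y = np.array([1.0, 0.0])
w, b = np.zeros(2), 0.0
for _ in range(200):                    # logistic regression, gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# Target domain (book reviews) with words never seen during source training:
pos = doc_vec(["utmärkt", "bok"]) @ w + b   # should score positive
neg = doc_vec(["hemsk", "bok"]) @ w + b     # should score negative
```

A randomly initialized embedding layer would give "utmärkt" and "hemsk" arbitrary positions, which matches the thesis's finding that the CNN without pre-trained embeddings transfers poorly.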
155
Modélisation multi-échelles de la morphologie urbaine à partir de données carroyées de population et de bâti / Multiscale modelling of urban morphology using gridded data. Baro, Johanna. 25 March 2015.
La question des liens entre forme urbaine et transport se trouve depuis une vingtaine d'années au cœur des réflexions sur la mise en place de politiques d'aménagement durable. L'essor de la diffusion de données sur grille régulière constitue dans ce cadre une nouvelle perspective pour la modélisation de structures urbaines à partir de mesures de densités affranchies de toutes les contraintes des maillages administratifs. A partir de données de densité de population et de surface bâtie disponibles à l'échelle de la France sur des grilles à mailles de 200 mètres de côté, nous proposons deux types de classifications adaptées à l'étude des pratiques de déplacement et du développement urbain : des classifications des tissus urbains et des classifications des morphotypes de développement urbain. La construction de telles images classées se base sur une démarche de modélisation théorique et expérimentale soulevant de forts enjeux méthodologiques quant à la classification d'espaces urbains statistiquement variés. Pour nous adapter au traitement exhaustif de ces espaces, nous avons proposé une méthode de classification des tissus urbains par transfert d'apprentissage supervisé. Cette méthode utilise le formalisme des champs de Markov cachés pour prendre en compte les dépendances présentes dans ces données spatialisées. Les classifications en morphotypes sont ensuite obtenues par un enrichissement de ces premières images classées, formalisé à partir de modèles chorématiques et mise en œuvre par raisonnement spatial qualitatif. L'analyse de ces images classées par des méthodes de raisonnement spatial quantitatif et d'analyses factorielles nous a permis de révéler la diversité morphologique de 50 aires urbaines françaises.
Elle nous a permis de mettre en avant la pertinence de ces classifications pour caractériser les espaces urbains en accord avec différents enjeux d'aménagement relatifs à la densité ou à la multipolarité. / For the last couple of decades, the relationships between urban form and travel patterns have been central to reflection on sustainable urban planning and transport policy. The increasing availability of regular-grid data offers in this context a new perspective for modeling urban structures from density measurements freed from the constraints of administrative divisions. Population density data are now available on 200-meter grids covering France. We complete these data with built-up area densities in order to propose two types of classified images adapted to the study of travel patterns and urban development: classifications of urban fabrics and classifications of morphotypes of urban development. The construction of such classified images is based on a theoretical and experimental modelling approach, which raises methodological issues regarding the classification of statistically varied urban spaces. To process these spaces exhaustively, we propose a per-pixel classification method of urban fabrics based on supervised transfer learning. Hidden Markov random fields are used to take into account the dependencies in the spatial data. The classifications of morphotypes are then obtained by enriching these first classified images. These classifications are formalized from chorematic theoretical models and implemented by qualitative spatial reasoning. The analysis of these classifications by methods of quantitative spatial reasoning and factor analysis allowed us to reveal the morphological diversity of 50 French metropolitan areas. It highlights the relevance of these classifications for characterizing urban areas in accordance with various planning issues related to density or multipolar development.
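The Markov random field ingredient, making each pixel's class depend on its neighbours as well as on its own evidence, can be illustrated with Iterated Conditional Modes over a Potts prior. This is a generic MRF sketch on synthetic unary costs, not the thesis's exact hidden-MRF transfer-learning formulation:

```python
import numpy as np

def icm_smooth(unary, beta=1.0, iters=5):
    """Iterated Conditional Modes: start from the per-pixel most likely
    class, then repeatedly relabel each pixel to minimise its unary cost
    plus a Potts penalty (beta per disagreeing 4-neighbour)."""
    labels = unary.argmin(axis=-1)
    h, w, k = unary.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        costs += beta * (np.arange(k) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels

# A 3x3 grid whose pixels all weakly prefer class 0, except a noisy centre
# pixel that slightly prefers class 1. Spatial context should overrule it.
unary = np.zeros((3, 3, 2)); unary[:, :, 1] = 0.5   # class 0 cheaper everywhere
unary[1, 1] = [0.6, 0.4]                            # noisy pixel favours class 1
labels = icm_smooth(unary, beta=1.0)
```

The spatial prior corrects isolated misclassifications, which is exactly what per-pixel classification of 200-meter density cells needs.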
156
Data-Efficient Reinforcement Learning Control of Robotic Lower-Limb Prosthesis With Human in the Loop. January 2020.
abstract: Robotic lower-limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the fact that the impedance control parameters need to be tuned and optimized manually by prosthetists for each individual user in different task environments. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, which makes it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning from a limited number of samples, or from samples with large variations. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.
This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman-optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared to several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.
Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), which is motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. This result again demonstrated that data and time efficiency were improved by using prior knowledge.
While the present study is new and promising, there are still many open questions to be addressed in future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as symmetry, user comfort level and satisfaction, and user energy consumption. To make the RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascending or descending, and from walking to running. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
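The FPI framework generalises classical policy iteration (on/off-policy learning from data, use of an external critic), which itself is easy to show on a toy two-state MDP. All transition probabilities and rewards below are illustrative assumptions, not prosthesis dynamics:

```python
import numpy as np

# Toy 2-state, 2-action MDP: action 1 moves to state 1 with probability 0.9,
# and R[s, a] = P[s, a, 1] rewards reaching state 1.
P = np.array([
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.9, 0.1], [0.1, 0.9]],
])
R = P[:, :, 1].copy()
gamma = 0.9

policy = np.zeros(2, dtype=int)
for _ in range(10):
    # Policy evaluation: solve the linear system (I - gamma * P_pi) V = R_pi.
    P_pi = P[np.arange(2), policy]
    R_pi = R[np.arange(2), policy]
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily on the one-step lookahead values.
    Q = R + gamma * (P @ V)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
```

The data-efficiency question the dissertation addresses is precisely what happens when `P` and `R` are unknown and must be replaced by a small set of noisy samples.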
157
Optimisation d'hyper-paramètres en apprentissage profond et apprentissage par transfert : applications en imagerie médicale / Hyper-parameter optimization in deep learning and transfer learning: applications to medical imaging. Bertrand, Hadrien. 15 January 2019.
Ces dernières années, l'apprentissage profond a complètement changé le domaine de la vision par ordinateur. Plus rapide, donnant de meilleurs résultats, et nécessitant une expertise moindre pour être utilisé que les méthodes classiques de vision par ordinateur, l'apprentissage profond est devenu omniprésent dans tous les problèmes d'imagerie, y compris l'imagerie médicale. Au début de cette thèse, la construction de réseaux de neurones adaptés à des tâches spécifiques ne bénéficiait pas encore de suffisamment d'outils ni d'une compréhension approfondie. Afin de trouver automatiquement des réseaux de neurones adaptés à des tâches spécifiques, nous avons ainsi apporté des contributions à l'optimisation d'hyper-paramètres de réseaux de neurones. Cette thèse propose une comparaison de certaines méthodes d'optimisation, une amélioration en performance d'une de ces méthodes, l'optimisation bayésienne, et une nouvelle méthode d'optimisation d'hyper-paramètres basée sur la combinaison de deux méthodes existantes : l'optimisation bayésienne et Hyperband. Une fois équipés de ces outils, nous les avons utilisés pour des problèmes d'imagerie médicale : la classification de champs de vue en IRM, et la segmentation du rein en échographie 3D pour deux groupes de patients. Cette dernière tâche a nécessité le développement d'une nouvelle méthode d'apprentissage par transfert reposant sur la modification du réseau de neurones source par l'ajout de nouvelles couches de transformations géométriques et d'intensité. En dernière partie, cette thèse revient vers les méthodes classiques de vision par ordinateur, et nous proposons un nouvel algorithme de segmentation qui combine les méthodes de déformation de modèles et l'apprentissage profond. Nous montrons comment utiliser un réseau de neurones pour prédire des transformations globales et locales sans accès aux vérités-terrains de ces transformations. Cette méthode est validée sur la tâche de la segmentation du rein en échographie 3D.
/ In the last few years, deep learning has irrevocably changed the field of computer vision. Faster, giving better results, and requiring a lower degree of expertise than traditional computer vision methods, deep learning has become ubiquitous in every imaging application, including medical imaging. At the beginning of this thesis, there was still a strong lack of tools and understanding of how to build efficient neural networks for specific tasks. This thesis therefore first focused on hyper-parameter optimization for deep neural networks, i.e. methods for automatically finding efficient neural networks for specific tasks. The thesis includes a comparison of different methods, a performance improvement of one of these methods, Bayesian optimization, and the proposal of a new hyper-parameter optimization method combining two existing ones: Bayesian optimization and Hyperband. From there, we used these methods for medical imaging applications such as the classification of field-of-view in MRI, and the segmentation of the kidney in 3D ultrasound images across two populations of patients. This last task required the development of a new transfer learning method based on modifying the source network by adding new geometric and intensity transformation layers. Finally, this thesis loops back to older computer vision methods, and we propose a new segmentation algorithm combining template deformation and deep learning. We show how to use a neural network to predict global and local transformations without requiring the ground truth of these transformations. The method is validated on the task of kidney segmentation in 3D ultrasound images.
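The Hyperband half of the proposed combination rests on successive halving: spend a small budget on many configurations and keep only the best fraction as the budget grows. A self-contained sketch with a synthetic validation score (the score function and search space are assumptions; in the thesis, Bayesian optimization replaces the random sampling of configurations, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_score(lr, budget):
    """Stand-in for training a network for `budget` epochs with learning
    rate `lr` and returning validation quality. Purely synthetic: quality
    peaks at lr = 0.1 and improves with budget."""
    return (1 - abs(np.log10(lr) + 1) / 3) * (1 - 1 / (budget + 1))

def successive_halving(configs, min_budget=1, eta=2, rounds=3):
    """Evaluate all configs on a small budget, keep the best 1/eta,
    multiply the budget by eta, repeat."""
    budget = min_budget
    while len(configs) > 1 and rounds > 0:
        scores = [validation_score(lr, budget) for lr in configs]
        order = np.argsort(scores)[::-1]
        configs = [configs[i] for i in order[:max(1, len(configs) // eta)]]
        budget *= eta
        rounds -= 1
    return configs[0]

# 8 random learning rates plus the known optimum, to make the demo deterministic.
lrs = list(10.0 ** rng.uniform(-4, 1, size=8)) + [0.1]
best = successive_halving(lrs)
```

Hyperband runs several such brackets with different initial budgets; combining it with Bayesian optimization means sampling `configs` from a fitted surrogate model instead of uniformly.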
158
Community Recommendation in Social Networks with Sparse Data. Emad Rahmaniazad (9725117). 7 January 2021.
Recommender systems are widely used in many domains. In this work, the importance of a recommender system in an online learning platform is discussed. After explaining the concept of adding an intelligent agent to online education systems, some features of the Course Networking (CN) website are demonstrated. Finally, the relation between CN, the intelligent agent (Rumi), and the recommender system is presented, along with a comparison of three different approaches for building a community recommendation system. The results show that Neighboring Collaborative Filtering (NCF) outperforms both the transfer learning method and the continuous bag-of-words approach. The NCF algorithm has a general form, with two different implementations, that can be reused for other recommendations, such as course, skill, major, and book recommendations.
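A minimal user-based neighbourhood collaborative filter of the kind NCF generalises can be sketched as follows; the cosine similarity, the toy rating matrix, and the mean-aggregation are illustrative assumptions, not the thesis's exact weighting:

```python
import numpy as np

def recommend(ratings, user, k=2, n=1):
    """Score unseen items for `user` by averaging the ratings of the k most
    similar users (cosine similarity), then return the top-n item indices."""
    sims = ratings @ ratings[user] / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]) + 1e-12)
    sims[user] = -1                       # exclude the user themself
    neighbours = np.argsort(sims)[::-1][:k]
    scores = ratings[neighbours].mean(axis=0)
    scores[ratings[user] > 0] = -1        # only recommend unseen items
    return np.argsort(scores)[::-1][:n]

# Rows = users, columns = communities; 1 = joined/liked, 0 = unknown.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)
rec = recommend(R, user=0)   # user 0 resembles user 1, who joined community 2
```

Sparsity is the hard part: with few interactions per user, the similarity estimates are noisy, which is why the thesis compares this family against transfer-learning and bag-of-words alternatives.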
159
Self-supervised Representation Learning via Image Out-painting for Medical Image Analysis. January 2020.
abstract: In recent years, Convolutional Neural Networks (CNNs) have been widely used not only in the computer vision community but also in the medical imaging community. Specifically, the use of CNNs pre-trained on large-scale datasets (e.g., ImageNet) via transfer learning for a variety of medical imaging applications has become the de facto standard in both communities.
However, to fit the current paradigm, 3D imaging tasks have to be reformulated and solved in 2D, losing rich 3D contextual information. Moreover, models pre-trained on natural images never see any biomedical images and have no knowledge of the anatomical structures present in medical images. To overcome these limitations, this thesis proposes an image out-painting self-supervised proxy task to develop pre-trained models directly from medical images, without systematic annotations. The idea is to randomly mask an image and train the model to predict the missing region. It is demonstrated that, by predicting missing anatomical structures when seeing only parts of the image, the model learns a generic representation that yields better performance on various medical imaging applications via transfer learning.
The extensive experiments demonstrate that the proposed proxy task outperforms training from scratch in six out of seven medical imaging applications covering 2D and 3D classification and segmentation. Moreover, the image out-painting proxy task offers performance competitive with state-of-the-art models pre-trained on ImageNet and with other self-supervised baselines such as in-painting. Owing to its outstanding performance, out-painting is utilized as one of the self-supervised proxy tasks to provide generic 3D pre-trained models for medical image analysis. / Dissertation/Thesis / Masters Thesis Computer Science 2020
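The proxy task itself reduces to constructing (input, target, mask) triples and scoring reconstruction only on the hidden region. This sketch fixes a central visible crop for simplicity, whereas the thesis masks regions randomly; the network that does the predicting is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def outpaint_pair(img, keep=0.5):
    """Build one out-painting training pair: the model sees only a central
    crop (mask == 1) and must reconstruct the surrounding region."""
    h, w = img.shape
    mask = np.zeros_like(img)                 # 1 = visible to the model
    mh, mw = int(h * keep), int(w * keep)
    top, left = (h - mh) // 2, (w - mw) // 2
    mask[top:top + mh, left:left + mw] = 1
    return img * mask, mask

def outpaint_loss(pred, target, mask):
    """MSE computed only over the hidden (to-be-out-painted) region."""
    hidden = 1 - mask
    return ((pred - target) ** 2 * hidden).sum() / hidden.sum()

img = rng.normal(size=(8, 8))                 # stand-in for a medical slice
x, mask = outpaint_pair(img)
loss_perfect = outpaint_loss(img, img, mask)  # perfect prediction: zero loss
loss_blank = outpaint_loss(np.zeros_like(img), img, mask)
```

Because no labels are involved, every unannotated scan becomes training data; the trained encoder is then fine-tuned on the downstream classification or segmentation task.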
160
Reinforcement Learning for Control of a Multi-Input, Multi-Output Model of the Human Arm. Crowder, Douglas Cale. 1 September 2021.
No description available.