31

Animal ID Tag Recognition with Convolutional and Recurrent Neural Network : Identifying digits from a number sequence with RCNN

Hijazi, Issa, Pettersson, Pontus January 2019 (has links)
Major advances in machine learning have made image recognition applications built on Artificial Neural Networks blossom in recent years. The aim of this thesis was to find a solution for recognizing the digits in a number sequence on an ID tag, used to identify farm animals, with the help of image recognition. A Recurrent Convolutional Neural Network solution called PPNet was proposed and tested on a data set called Animal Identification Tags. A transfer learning method was also used to test whether it could help PPNet generalize and better recognize digits. PPNet was then compared against Microsoft Azure's own image recognition API to determine how PPNet compares to a general solution. While PPNet did not perform as well, it still achieved results competitive with the Azure API.
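
The abstract does not spell out PPNet's architecture, but the general recipe it names (a CNN front end feeding a recurrent layer that reads out a digit sequence) can be sketched as follows. This is a hypothetical, minimal PyTorch illustration, not the thesis code: the layer sizes, the 32-pixel-high input crop, and the CTC-style per-timestep output are all assumptions.

```python
import torch
import torch.nn as nn

class DigitSequenceReader(nn.Module):
    """CNN extracts column features from a tag crop; a bidirectional GRU reads them left to right."""
    def __init__(self, num_classes=11):          # 10 digits plus one "blank" symbol (CTC-style output)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.GRU(input_size=64 * 8, hidden_size=128,
                          bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                         # x: (batch, 1, 32, width) grayscale tag crops
        f = self.cnn(x)                           # (batch, 64, 8, width / 4)
        f = f.permute(0, 3, 1, 2).flatten(2)      # (batch, width / 4, 64 * 8) sequence of column features
        out, _ = self.rnn(f)
        return self.head(out)                     # per-timestep digit scores

model = DigitSequenceReader()
scores = model(torch.randn(4, 1, 32, 128))        # four dummy 32x128 tag crops
print(scores.shape)                               # torch.Size([4, 32, 11])
```

In practice a model like this is usually trained with a CTC loss, so the variable-length digit string does not need to be aligned to image columns by hand.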
32

Exploring Transfer Learning via Convolutional Neural Networks for Image Classification and Super-Resolution

Ribeiro, Eduardo Ferreira 22 March 2018 (has links)
This work presents my research on the use of Convolutional Neural Networks (CNNs) for transfer learning through its application to colonic polyp classification and iris super-resolution. Traditionally, machine learning methods use the same feature space and the same distribution for training and testing. Several problems can emerge with this approach, for example when the number of samples for training (especially in supervised training) is limited. In the medical field this problem is recurrent, mainly because obtaining a database large enough, with appropriate annotations for training, is highly costly and may become impractical. Another problem relates to the distribution of textural features in an image database, which may be too broad, as with the texture patterns of the human iris. In this case a single, specific training database might not generalize well enough to be applied to the entire domain. In this work we explore the use of texture transfer learning to overcome these problems for two applications: colonic polyp classification and iris super-resolution. The leading cause of deaths related to the intestinal tract is the development of cancer cells (polyps) in its many parts. Early detection (when the cancer is still at an early stage) can reduce the risk of mortality among these patients. More specifically, colonic polyps (benign tumors or growths which arise on the inner colon surface) have a high occurrence and are known precursors of colon cancer development. Several studies have shown that automatic detection and classification of image regions which may contain polyps within the colon can be used to assist specialists in order to decrease the polyp miss rate. However, the classification can be a difficult task due to several factors, such as the lack or excess of illumination, blurring due to movement or water injection, and the varied appearance of polyps. Also, finding a robust and global feature extractor that summarizes and represents all these pit-pattern structures in a single vector is very difficult, and Deep Learning can be a good alternative for overcoming these problems. One of the goals of this work is to show the effectiveness of CNNs trained from scratch for colonic polyp classification, as well as the capability of transferring knowledge between natural images and medical images using off-the-shelf pretrained CNNs for colonic polyp classification. In this case, the CNN projects the target database samples into a vector space where the classes are more likely to be separable (see the short illustrative sketch after this abstract). The second part of this work is dedicated to transfer learning for iris super-resolution. The main goal of Super-Resolution (SR) is to produce, from one or more images, an image with a higher resolution (more pixels) that is at the same time more detailed and realistic while remaining faithful to the low-resolution image(s). Currently, most iris recognition systems require the user to present their iris to the sensor at a close distance. However, there is constant pressure to allow more relaxed acquisition conditions in such systems. In this work we show that the use of deep learning and transfer learning for single-image super-resolution applied to iris recognition can be an alternative for iris recognition from low-resolution images.
For this purpose, we explore whether the nature of the images, as well as the pattern of the iris, can influence the CNN transfer learning and, consequently, the results of the recognition process. / Diese Arbeit präsentiert meine Forschung hinsichtlich der Verwendung von "Transfer-Learning" (TL) in Kombination mit Convolutional Neural Networks (CNNs), um dadurch die Klassifikation von Dickdarmpolypen und die Qualität von Iris-Bildern ("Iris-Super-Resolution") zu verbessern. Herkömmlicherweise verwenden Verfahren des maschinellen Lernens den gleichen Merkmalsraum und die gleiche Verteilung zum Trainieren und Testen der angewendeten Methoden. Mehrere Probleme können bei diesem Ansatz jedoch auftreten. Zum Beispiel ist es möglich, dass die Anzahl der zu trainierenden Daten (insbesondere in einem "supervised training"-Szenario) begrenzt ist. Im Speziellen im medizinischen Anwendungsfall ist man regelmäßig mit dem angesprochenen Problem konfrontiert, da die Zusammenstellung einer Datenbank, welche über eine geeignete Anzahl an verwendbaren Daten verfügt, entweder sehr kostspielig ist und/oder sich als über die Maßen zeitaufwändig herausstellt. Ein anderes Problem betrifft die Verteilung von Strukturmerkmalen in einer Bilddatenbank, die zu groß sein kann, wie es im Fall der Verwendung von Texturmustern der menschlichen Iris auftritt. Dies kann zu dem Umstand führen, dass eine einzelne und sehr spezifische Trainingsdatenbank möglicherweise nicht ausreichend verallgemeinert wird, um sie auf die gesamte betrachtete Domäne anzuwenden. In dieser Arbeit wird die Verwendung von TL auf diverse Texturen untersucht, um die zuvor angesprochenen Probleme für zwei Anwendungen zu überwinden: in der Klassifikation von Dickdarmpolypen und in Iris-Super-Resolution. Die Hauptursache für Todesfälle im Zusammenhang mit dem Darmtrakt ist die Entwicklung von Krebszellen (Polypen) in vielen unterschiedlichen Ausprägungen. Eine Früherkennung kann das Mortalitätsrisiko bei Patienten verringern, wenn sich der Krebs noch in einem frühen Stadium befindet. Genauer gesagt, Dickdarmpolypen (gutartige Tumore oder Wucherungen, die an der inneren Dickdarmoberfläche entstehen) haben ein hohes Vorkommen und sind bekanntermaßen Vorläufer von Darmkrebsentwicklung. Mehrere Studien haben gezeigt, dass die automatische Erkennung und Klassifizierung von Bildregionen, die Polypen innerhalb des Dickdarms möglicherweise enthalten, verwendet werden können, um Spezialisten zu helfen, die Fehlerrate bei Polypen zu verringern. Die Klassifizierung kann sich jedoch aufgrund mehrerer Faktoren als eine schwierige Aufgabe herausstellen. Zum Beispiel kann das Fehlen oder ein Übermaß an Beleuchtung zu starken Problemen hinsichtlich der Kontrastinformation der Bilder führen, wohingegen Unschärfe aufgrund von Bewegung/Wassereinspritzung die Qualität des Bildmaterials ebenfalls verschlechtert. Daten, welche ein unterschiedlich starkes Auftreten von Polypen repräsentieren, bieten auch die Möglichkeit zu einer Reduktion der Klassifizierungsgenauigkeit. Weiters ist es sehr schwierig, einen robusten und vor allem globalen Feature-Extraktor zu finden, der all die notwendigen Pit-Pattern-Strukturen in einem einzigen Vektor zusammenfasst und darstellt. Um mit diesen Problemen adäquat umzugehen, kann die Anwendung von CNNs eine gute Alternative bieten. Eines der Ziele dieser Arbeit ist es, die Wirksamkeit von CNNs, die von Grund auf für die Klassifikation von Dickdarmpolypen konstruiert wurden, zu zeigen.
Des Weiteren soll die Anwendung von TL unter der Verwendung vorgefertigter CNNs für die Klassifikation von Dickdarmpolypen untersucht werden. Hierbei wird zusätzliche Information von nichtmedizinischen Bildern hinzugezogen und mit den verwendeten medizinischen Daten verbunden: Information wird also transferiert - TL entsteht. Auch in diesem Fall projiziert das CNN die Zieldatenbank (die Polypenbilder) in einen vorher trainierten Vektorraum, in dem die zu separierenden Klassen dann eher trennbar sind, da Wissen aus den nicht-medizinischen Bildern einfließt. Der zweite Teil dieser Arbeit widmet sich dem TL hinsichtlich der Verbesserung der Bildqualität von Iris-Bildern - "Iris-Super-Resolution". Das Hauptziel von Super-Resolution (SR) ist es, aus einem oder mehreren Bildern gleichzeitig ein Bild mit einer höheren Auflösung (mit mehr Pixeln) zu erzeugen, welches dadurch zu einem detaillierteren und somit realistischeren Bild wird, wobei der visuelle Bildinhalt unverändert bleibt. Gegenwärtig fordern die meisten Iris-Erkennungssysteme, dass der Benutzer seine Iris für den Sensor in geringer Entfernung präsentiert. Jedoch ist es ein Anliegen der Industrie, die bisher notwendigen Bedingungen - kurzer Abstand zwischen Sensor und Iris sowie Verwendung von sehr teuren hochqualitativen Sensoren - zu verändern. Diese Veränderung betrifft einerseits die Verwendung von billigeren Sensoren und andererseits die Vergrößerung des Abstandes zwischen Iris und Sensor. Beide Anpassungen führen zu einer Reduktion der Bildqualität, was sich direkt auf die Erkennungsgenauigkeit der aktuell verwendeten Iriserkennungssysteme auswirkt. In dieser Arbeit zeigen wir, dass die Verwendung von CNNs und TL für die "Single Image Super-Resolution", die bei der Iriserkennung angewendet wird, eine Alternative für die Iriserkennung von Bildern mit niedriger Auflösung sein kann. Zu diesem Zweck untersuchen wir, ob die Art der Bilder sowie das Muster der Iris das CNN-TL beeinflusst und folglich die Ergebnisse im Erkennungsprozess verändern kann.
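
As a rough illustration of the "off-the-shelf pretrained CNN" idea described in the abstract, the sketch below maps target-domain images through a network pretrained on natural images and trains only a small classifier on top. It is not the author's code: the choice of ResNet-18, the input size, and the two-class polyp/non-polyp head are assumptions, and the weights argument shown assumes a recent torchvision version.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained on natural images (ImageNet); the weights argument assumes torchvision >= 0.13.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()                # keep the 512-dimensional feature extractor only
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False                # off-the-shelf features: no fine-tuning

classifier = nn.Linear(512, 2)             # assumed two-class head, e.g. polyp vs. non-polyp

def predict(images):
    """images: (batch, 3, 224, 224) tensor of target-domain samples."""
    with torch.no_grad():
        feats = backbone(images)           # projection into the pretrained feature space
    return classifier(feats)

logits = predict(torch.randn(2, 3, 224, 224))
print(logits.shape)                        # torch.Size([2, 2])
```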
33

Seleção de abstração espacial no Aprendizado por Reforço avaliando o processo de aprendizagem / Selection of spatial abstraction in Reinforcement Learning by learning process evaluating

Silva, Cleiton Alves da 14 June 2017 (has links)
Agentes que utilizam técnicas de Aprendizado por Reforço (AR) buscam resolver problemas que envolvem decisões sequenciais em ambientes estocásticos sem conhecimento a priori. O processo de aprendizado desenvolvido pelo agente em geral é lento, visto que se concretiza por tentativa e erro e exige repetidas interações com cada estado do ambiente e como o estado do ambiente é representado por vários fatores, a quantidade de estados cresce exponencialmente de acordo com o número de variáveis de estado. Uma das técnicas para acelerar o processo de aprendizado é a generalização de conhecimento, que visa melhorar o processo de aprendizado, seja no mesmo problema por meio da abstração, ao explorar a similaridade entre estados semelhantes ou em diferentes problemas, ao transferir o conhecimento adquirido de um problema fonte para acelerar a aprendizagem em um problema alvo. Uma abstração considera partes do estado e, ainda que uma única não seja suficiente, é necessário descobrir qual combinação de abstrações pode atingir bons resultados. Nesta dissertação é proposto um método para seleção de abstração, considerando o processo de avaliação da aprendizagem durante o aprendizado. A contribuição é formalizada pela apresentação do algoritmo REPO, utilizado para selecionar e avaliar subconjuntos de abstrações. O algoritmo é iterativo e a cada rodada avalia novos subconjuntos de abstrações, conferindo uma pontuação para cada uma das abstrações existentes no subconjunto e por fim, retorna o subconjunto com as abstrações melhores pontuadas. Experimentos com o simulador de futebol mostram que esse método é efetivo e consegue encontrar um subconjunto com uma quantidade menor de abstrações que represente o problema original, proporcionando melhoria em relação ao desempenho do agente em seu aprendizado / Agents that use Reinforcement Learning (RL) techniques seek to solve problems that involve sequential decisions in stochastic environments without a priori knowledge. The learning process developed by the agent is in general slow, since it proceeds by trial and error and requires repeated interactions with each state of the environment; and because the state of the environment is represented by several factors, the number of states grows exponentially with the number of state variables. One of the techniques to accelerate the learning process is the generalization of knowledge, which aims to improve the learning process either within the same problem, through abstraction, by exploiting the similarity between similar states, or across different problems, by transferring the knowledge acquired from a source problem to accelerate learning in a target problem. An abstraction considers parts of the state and, although a single one is not sufficient, it is necessary to discover which combination of abstractions can achieve good results. In this work, a method for abstraction selection is proposed, considering the evaluation of the learning process during learning. The contribution is formalized by the presentation of the REPO algorithm, used to select and evaluate subsets of abstractions. The algorithm is iterative and in each round evaluates new subsets of abstractions, assigning a score to each of the abstractions in the subset, and finally returns the subset with the highest-scoring abstractions. Experiments with the soccer simulator show that this method is effective and can find a subset with a smaller number of abstractions that represents the original problem, improving the agent's learning performance.
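
The abstract describes REPO only at a high level (iteratively evaluating subsets of abstractions and scoring the abstractions they contain), so the following is a deliberately generic sketch of such a subset-scoring loop, not the REPO algorithm itself. The abstraction names and the stubbed evaluation function are invented for illustration; a real implementation would replace the stub with an actual learning-quality measurement from the RL agent.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate_subset(abstraction_subset):
    """Placeholder: run the agent with only these state abstractions and return a
    learning-quality score. REPO's actual evaluation is not specified in the abstract."""
    return rng.random()                                   # stub so the sketch runs end to end

def select_abstractions(abstractions, rounds=50, subset_size=3):
    scores = {a: [] for a in abstractions}
    for _ in range(rounds):
        subset = list(rng.choice(abstractions, size=subset_size, replace=False))
        quality = evaluate_subset(subset)
        for a in subset:
            scores[a].append(quality)                     # credit every abstraction in the subset
    ranked = sorted(abstractions, key=lambda a: np.mean(scores[a] or [0]), reverse=True)
    return ranked[:subset_size]                           # keep the best-scored abstractions

print(select_abstractions(["x-grid", "y-grid", "ball-dist", "goal-angle", "teammate-dist"]))
```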
34

Efficient supervision for robot learning via imitation, simulation, and adaptation

Wulfmeier, Markus January 2018 (has links)
In order to enable more widespread application of robots, we must reduce the human effort needed to introduce existing robotic platforms to new environments and tasks. In this thesis, we identify three complementary strategies to address this challenge, via the use of imitation learning, domain adaptation, and transfer learning based on simulations. The overall work strives to reduce the effort of generating training data by employing inexpensively obtainable labels and by transferring information between different domains with deviating underlying properties. Imitation learning offers a straightforward way for untrained personnel to teach robots to perform tasks by providing demonstrations, which represent a comparably inexpensive source of supervision. We develop a scalable approach to identify the preferences underlying demonstration data via the framework of inverse reinforcement learning. The method enables integration of the extracted preferences as cost maps into existing motion planning systems. We further incorporate prior domain knowledge and demonstrate that the approach outperforms the baselines, including manually crafted cost functions. In addition to employing low-cost labels from demonstration, we investigate the adaptation of models to domains without available supervisory information. Specifically, the challenge of appearance changes in outdoor robotics, such as illumination and weather shifts, is addressed using an adversarial domain adaptation approach. A principal advantage of the method over prior work is the straightforwardness of adapting arbitrary, state-of-the-art neural network architectures. Finally, we demonstrate performance benefits of the method for semantic segmentation of drivable terrain. Our last contribution focuses on simulation-to-real-world transfer learning, where the characteristic differences lie not only in visual appearance but also in the underlying system dynamics. Our work aims at parallel training in both systems and mutual guidance via auxiliary alignment rewards to accelerate training for real-world systems. The approach is shown to outperform various baselines as well as a unilateral alignment variant.
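
The thesis' adversarial domain adaptation method is not detailed in the abstract; as a point of reference, a widely used pattern for aligning features across appearance changes is a domain discriminator trained through a gradient-reversal layer (a DANN-style setup). The sketch below shows that generic pattern, simplified to image-level classification rather than full semantic segmentation; all layer sizes and the two-domain setup are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad                                   # reversed gradient pushes features toward domain confusion

features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
task_head = nn.Linear(16, 2)                           # e.g. drivable / not drivable, labeled on the source domain only
domain_head = nn.Linear(16, 2)                         # e.g. sunny vs. overcast, labeled on both domains

def combined_loss(src_x, src_y, tgt_x):
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = features(src_x), features(tgt_x)
    task_loss = ce(task_head(f_src), src_y)
    dom_logits = domain_head(GradReverse.apply(torch.cat([f_src, f_tgt])))
    dom_labels = torch.cat([torch.zeros(len(src_x)), torch.ones(len(tgt_x))]).long()
    return task_loss + ce(dom_logits, dom_labels)      # adversarial term aligns the two domains

loss = combined_loss(torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,)), torch.randn(4, 3, 64, 64))
loss.backward()
```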
35

Adapting deep neural networks as models of human visual perception

McClure, Patrick January 2018 (has links)
Deep neural networks (DNNs) have recently been used to solve complex perceptual and decision tasks. In particular, convolutional neural networks (CNNs) have been extremely successful for visual perception. In addition to performing well on the trained object recognition task, these CNNs also model brain data throughout the visual hierarchy better than previous models. However, these DNNs are still far from completely explaining visual perception in the human brain. In this thesis, we investigated two methods with the goal of improving DNNs’ capabilities to model human visual perception: (1) deep representational distance learning (RDL), a method for driving representational spaces in deep nets into alignment with other (e.g. brain) representational spaces, and (2) variational DNNs that use sampling to perform approximate Bayesian inference. In the first investigation, RDL successfully transferred information from a teacher model to a student DNN. This was achieved by driving the student DNN’s representational distance matrix (RDM), which characterises the representational geometry, into alignment with that of the teacher. This led to a significant increase in test accuracy on machine learning benchmarks. In the future, we plan to use this method to simultaneously train DNNs to perform complex tasks and to predict neural data. In the second investigation, we showed that sampling during learning and inference using simple Bernoulli- and Gaussian-based noise improved a CNN’s representation of its own uncertainty for object recognition. We also found that sampling during learning and inference with Gaussian noise improved how well CNNs predict human behavioural data for image classification. While these methods alone do not fully explain human vision, they allow for training CNNs that better model several features of human visual perception.
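
A simplified reading of the RDL idea (matching the student's representational distance matrix to the teacher's) can be written in a few lines. This is an illustrative sketch rather than the method from the thesis: it uses squared Euclidean distances and a plain mean-squared error between RDMs, whereas the actual work may use different distance and comparison measures.

```python
import torch

def rdm(acts):
    """Representational distance matrix: pairwise squared Euclidean distances
    between the activation vectors elicited by a batch of stimuli."""
    sq = (acts ** 2).sum(dim=1, keepdim=True)
    return sq + sq.T - 2.0 * acts @ acts.T

def rdl_loss(student_acts, teacher_acts):
    # Penalize mismatch between the two representational geometries.
    return torch.mean((rdm(student_acts) - rdm(teacher_acts)) ** 2)

student = torch.randn(8, 64, requires_grad=True)   # stand-in activations for 8 stimuli
teacher = torch.randn(8, 100)                      # the teacher space may have a different dimensionality
loss = rdl_loss(student, teacher)
loss.backward()                                    # gradients pull the student's geometry toward the teacher's
```

Because both RDMs are N x N regardless of layer width, this kind of auxiliary loss can compare representations of different dimensionality, which is what makes it usable with brain data.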
36

Self Exploration of Sensorimotor Spaces in Robots. / L’auto-exploration des espaces sensorimoteurs chez les robots

Benureau, Fabien 18 May 2015 (has links)
La robotique développementale a entrepris, au cours des quinze dernières années, d'étudier les processus développementaux, similaires à ceux des systèmes biologiques, chez les robots. Le but est de créer des robots qui ont une enfance (qui rampent avant d'essayer de courir, qui jouent avant de travailler) et qui basent leurs décisions sur l'expérience de toute une vie, incarnés dans le monde réel. Dans ce contexte, cette thèse étudie l'exploration sensorimotrice (la découverte pour un robot de son propre corps et de son environnement proche) pendant les premiers stades du développement, lorsqu'aucune expérience préalable du monde n'est disponible. Plus spécifiquement, cette thèse se penche sur la façon de générer une diversité d'effets dans un environnement inconnu. Cette approche se distingue par son absence de fonction de récompense ou de fitness définie par un expert, la rendant particulièrement apte à être intégrée sur des robots auto-suffisants. Dans une première partie, l'approche est motivée et le problème de l'exploration est formalisé, avec la définition de mesures quantitatives pour évaluer le comportement des algorithmes et d'un cadre architectural pour la création de ces derniers. Via l'examen détaillé de l'exemple d'un bras robot à multiples degrés de liberté, la thèse explore quelques-unes des problématiques fondamentales que l'exploration sensorimotrice pose, comme la haute dimensionnalité et la redondance sensorimotrice. Cela est fait en particulier via la comparaison entre deux stratégies d'exploration : le babillage moteur et le babillage dirigé par les objectifs. Plusieurs algorithmes sont proposés tour à tour et leur comportement est évalué empiriquement, étudiant les interactions qui naissent avec les contraintes développementales, les démonstrations externes et les synergies motrices. De plus, parce que même des algorithmes efficaces peuvent se révéler terriblement inefficaces lorsque leurs capacités d'apprentissage ne sont pas adaptées aux caractéristiques de leur environnement, une architecture est proposée qui peut dynamiquement choisir la stratégie d'exploration la plus adaptée parmi un ensemble de stratégies. Mais même avec de bons algorithmes, l'exploration sensorimotrice reste une entreprise coûteuse, un problème important étant donné que les robots font face à des contraintes fortes sur la quantité de données qu'ils peuvent extraire de leur environnement ; chaque observation prenant un temps non négligeable à récupérer. [...] À travers cette thèse, les contributions les plus importantes sont les descriptions algorithmiques et les résultats expérimentaux. De manière à permettre la reproduction et la réexamination sans contrainte de tous les résultats, l'ensemble du code est mis à disposition. L'exploration sensorimotrice est un mécanisme fondamental du développement des systèmes biologiques. La séparer délibérément des mécanismes d'apprentissage et l'étudier pour elle-même dans cette thèse permet d'éclairer des problèmes importants que les robots se développant seuls seront amenés à affronter. / Developmental robotics has, over the last fifteen years, begun to study robots that have a childhood (crawling before trying to run, playing before being useful) and that base their decisions upon a lifelong, embodied experience of the real world. In this context, this thesis studies sensorimotor exploration, the discovery of a robot's own body and proximal environment, during the early developmental stages, when no prior experience of the world is available.
Specifically, we investigate how to generate a diversity of effects in an unknown environment. This approach distinguishes itself by its lack of user-defined reward or fitness function, making it especially suited for integration in self-sufficient platforms. In a first part, we motivate our approach, formalize the exploration problem, define quantitative measures to assess performance, and propose an architectural framework to devise algorithms. Through the extensive examination of a multi-joint arm example, we explore some of the fundamental challenges that sensorimotor exploration faces, such as high dimensionality and sensorimotor redundancy, in particular through a comparison between motor babbling and goal babbling exploration strategies. We propose several algorithms and empirically study their behaviour, investigating the interactions with developmental constraints, external demonstrations and biologically inspired motor synergies. Furthermore, because even efficient algorithms can deliver disastrous performance when their learning abilities do not align with the environment's characteristics, we propose an architecture that can dynamically discriminate among a set of exploration strategies. Even with good algorithms, sensorimotor exploration is still an expensive proposition, a problem since robots inherently face constraints on the amount of data they are able to gather; each observation takes a non-negligible time to collect. [...] Throughout this thesis, our core contributions are algorithm descriptions and empirical results. In order to allow unrestricted examination and reproduction of all our results, the entire code is made available. Sensorimotor exploration is a fundamental developmental mechanism of biological systems. By decoupling it from learning and studying it in its own right in this thesis, we engage in an approach that casts light on important problems facing robots developing on their own.
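
To make the motor- versus goal-babbling comparison concrete, here is a small, self-contained toy sketch (not from the thesis): a redundant three-joint arm is explored either by drawing random motor commands or by drawing random goals in effect space and perturbing the nearest known command. The arm model, noise levels, and the grid-coverage measure of diversity are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def arm(motor):                             # toy 3-joint arm: joint angles -> hand position (the "effect")
    angles = np.cumsum(motor)
    return np.array([np.cos(angles).sum(), np.sin(angles).sum()]) / 3.0

def motor_babbling(n):                      # strategy 1: draw random motor commands
    return [arm(rng.uniform(-np.pi, np.pi, 3)) for _ in range(n)]

def goal_babbling(n):                       # strategy 2: draw random goals, reuse the nearest known command
    motors = [rng.uniform(-np.pi, np.pi, 3)]
    effects = [arm(motors[0])]
    for _ in range(n - 1):
        goal = rng.uniform(-1, 1, 2)
        nearest = int(np.argmin([np.linalg.norm(e - goal) for e in effects]))
        m = motors[nearest] + rng.normal(0.0, 0.3, 3)      # perturb the best-known command
        motors.append(m)
        effects.append(arm(m))
    return effects

def coverage(effects, bins=20):             # crude diversity measure: number of occupied grid cells
    h, _, _ = np.histogram2d(*np.array(effects).T, bins=bins, range=[[-1, 1], [-1, 1]])
    return int((h > 0).sum())

print("motor babbling coverage:", coverage(motor_babbling(2000)))
print("goal babbling coverage: ", coverage(goal_babbling(2000)))
```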
37

Seleção de abstração espacial no Aprendizado por Reforço avaliando o processo de aprendizagem / Selection of spatial abstraction in Reinforcement Learning by learning process evaluating

Cleiton Alves da Silva 14 June 2017 (has links)
Agentes que utilizam técnicas de Aprendizado por Reforço (AR) buscam resolver problemas que envolvem decisões sequenciais em ambientes estocásticos sem conhecimento a priori. O processo de aprendizado desenvolvido pelo agente em geral é lento, visto que se concretiza por tentativa e erro e exige repetidas interações com cada estado do ambiente e como o estado do ambiente é representado por vários fatores, a quantidade de estados cresce exponencialmente de acordo com o número de variáveis de estado. Uma das técnicas para acelerar o processo de aprendizado é a generalização de conhecimento, que visa melhorar o processo de aprendizado, seja no mesmo problema por meio da abstração, ao explorar a similaridade entre estados semelhantes ou em diferentes problemas, ao transferir o conhecimento adquirido de um problema fonte para acelerar a aprendizagem em um problema alvo. Uma abstração considera partes do estado e, ainda que uma única não seja suficiente, é necessário descobrir qual combinação de abstrações pode atingir bons resultados. Nesta dissertação é proposto um método para seleção de abstração, considerando o processo de avaliação da aprendizagem durante o aprendizado. A contribuição é formalizada pela apresentação do algoritmo REPO, utilizado para selecionar e avaliar subconjuntos de abstrações. O algoritmo é iterativo e a cada rodada avalia novos subconjuntos de abstrações, conferindo uma pontuação para cada uma das abstrações existentes no subconjunto e por fim, retorna o subconjunto com as abstrações melhores pontuadas. Experimentos com o simulador de futebol mostram que esse método é efetivo e consegue encontrar um subconjunto com uma quantidade menor de abstrações que represente o problema original, proporcionando melhoria em relação ao desempenho do agente em seu aprendizado / Agents that use Reinforcement Learning (RL) techniques seek to solve problems that involve sequential decisions in stochastic environments without a priori knowledge. The learning process developed by the agent is in general slow, since it proceeds by trial and error and requires repeated interactions with each state of the environment; and because the state of the environment is represented by several factors, the number of states grows exponentially with the number of state variables. One of the techniques to accelerate the learning process is the generalization of knowledge, which aims to improve the learning process either within the same problem, through abstraction, by exploiting the similarity between similar states, or across different problems, by transferring the knowledge acquired from a source problem to accelerate learning in a target problem. An abstraction considers parts of the state and, although a single one is not sufficient, it is necessary to discover which combination of abstractions can achieve good results. In this work, a method for abstraction selection is proposed, considering the evaluation of the learning process during learning. The contribution is formalized by the presentation of the REPO algorithm, used to select and evaluate subsets of abstractions. The algorithm is iterative and in each round evaluates new subsets of abstractions, assigning a score to each of the abstractions in the subset, and finally returns the subset with the highest-scoring abstractions. Experiments with the soccer simulator show that this method is effective and can find a subset with a smaller number of abstractions that represents the original problem, improving the agent's learning performance.
38

New Statistical Transfer Learning Models for Health Care Applications

January 2018 (has links)
abstract: Transfer learning is a sub-field of statistical modeling and machine learning. It refers to methods that integrate the knowledge of other domains (called source domains) and the data of the target domain in a mathematically rigorous and intelligent way, to develop a better model for the target domain than a model using the data of the target domain alone. While transfer learning is a promising approach in various application domains, my dissertation research focuses on the particular application in health care, including telemonitoring of Parkinson’s Disease (PD) and radiomics for glioblastoma. The first topic is a Mixed Effects Transfer Learning (METL) model that can flexibly incorporate mixed effects and a general-form covariance matrix to better account for similarity and heterogeneity across subjects. I further develop computationally efficient procedures to handle unknown parameters and large covariance structures. Domain relations, such as domain similarity and domain covariance structure, are automatically quantified in the estimation steps. I demonstrate METL in an application of smartphone-based telemonitoring of PD. The second topic focuses on an MRI-based transfer learning algorithm for non-invasive surgical guidance of glioblastoma patients. Limited biopsy samples per patient create a challenge to build a patient-specific model for glioblastoma. A transfer learning framework helps to leverage other patients’ knowledge for building a better predictive model. When modeling a target patient, not every patient’s information is helpful. Deciding the subset of other patients from which to transfer information to the modeling of the target patient is an important task to build an accurate predictive model. I define the subset of “transferrable” patients as those who have a positive rCBV-cell density correlation, because a positive correlation is confirmed by imaging theory and its respective literature. The last topic is a Privacy-Preserving Positive Transfer Learning (P3TL) model. Although negative transfer has been recognized as an important issue by the transfer learning research community, there is a lack of theoretical studies in evaluating the risk of negative transfer for a transfer learning method and identifying what causes the negative transfer. My work addresses this issue. Driven by the theoretical insights, I extend Bayesian Parameter Transfer (BPT) to a new method, i.e., P3TL. The unique features of P3TL include intelligent selection of patients to transfer in order to avoid negative transfer and maintain patient privacy. These features make P3TL an excellent model for telemonitoring of PD using an At-Home Testing Device. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2018
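
The "transferrable patient" rule described above (keep source patients with a positive rCBV versus cell-density correlation) can be illustrated in a few lines of Python. The data layout and the toy measurements below are invented; only the selection criterion follows the abstract.

```python
import numpy as np
from scipy.stats import pearsonr

def transferrable_patients(patients):
    """Keep source patients whose rCBV and cell-density samples correlate positively."""
    selected = []
    for pid, (rcbv, density) in patients.items():
        r, _ = pearsonr(np.asarray(rcbv), np.asarray(density))
        if r > 0:
            selected.append(pid)
    return selected

# Toy, made-up biopsy measurements (rCBV values, cell densities) per patient.
patients = {
    "p1": ([0.8, 1.1, 1.9, 2.4], [120, 180, 260, 300]),   # positive correlation: transferrable
    "p2": ([0.9, 1.5, 2.2, 2.8], [310, 250, 190, 140]),   # negative correlation: excluded
}
print(transferrable_patients(patients))                    # ['p1']
```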
39

Sentiment analysis and transfer learning using recurrent neural networks : an investigation of the power of transfer learning / Sentimentanalys och överföringslärande med neuronnät

Pettersson, Harald January 2019 (has links)
In the field of data mining, transfer learning is the method of transferring knowledge from one domain into another. Using reviews from prisjakt.se, a Swedish price comparison site, and from hotels.com, this work investigates how the similarities between domains affect the results of transfer learning when using recurrent neural networks. We test several different domains with different characteristics, e.g. size and lexical similarity. In this work only relatively similar domains were used, the same target function was sought, and all reviews were in Swedish. Regardless, the results are conclusive: transfer learning is often beneficial, but it is highly dependent on the features of the domains and how they compare with each other.
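
A common way to realize the transfer setup described above is to pre-train a recurrent classifier on the large source domain and then fine-tune it on the smaller target domain. The sketch below shows that generic recipe, not the thesis' exact model: the vocabulary size, layer sizes, and the (commented-out) data loaders are assumptions.

```python
import torch
import torch.nn as nn

class ReviewClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                 # negative / positive sentiment

    def forward(self, token_ids):                       # token_ids: (batch, seq_len) integer tensor
        _, h = self.rnn(self.emb(token_ids))
        return self.out(h[-1])

def train(model, loader, epochs, lr=1e-3):
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()

model = ReviewClassifier()
print(model(torch.randint(0, 20000, (2, 30))).shape)    # torch.Size([2, 2])
# train(model, source_loader, epochs=5)                 # 1) pre-train on the large source domain
# for p in model.emb.parameters():
#     p.requires_grad = False                           # 2) optionally freeze lower layers
# train(model, target_loader, epochs=2)                 # 3) fine-tune on the smaller target domain
```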
40

Human Activity Recognition Based on Transfer Learning

Pang, Jinyong 06 July 2018 (has links)
Human activity recognition (HAR) based on time series data is the problem of classifying various patterns of activity. Its wide application in health care carries huge commercial benefit. With the increasing spread of smart devices, people have a strong desire for services and products that adapt to their individual characteristics. Deep learning models can handle HAR tasks with satisfactory results. However, training a deep learning model consumes a great deal of time and computational resources. Consequently, developing a HAR system efficiently becomes a challenging task. In this study, we develop a solid HAR system using a Convolutional Neural Network based on transfer learning, which can remove those barriers.
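
One typical way to apply transfer learning to time-series HAR, in the spirit of the abstract above, is to reuse a convolutional feature extractor trained on a large source dataset and retrain only the classification head for the new user or device. The sketch below is a hypothetical illustration: the sensor-channel count, window length, layer sizes, and the weight file name are all assumptions.

```python
import torch
import torch.nn as nn

class HARNet(nn.Module):
    """1-D CNN over fixed-length windows of inertial sensor data (batch, channels, time)."""
    def __init__(self, channels=6, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

model = HARNet()
# model.load_state_dict(torch.load("har_source_pretrained.pt"))   # hypothetical source-domain weights
for p in model.features.parameters():
    p.requires_grad = False                       # reuse the convolutional features without retraining
model.classifier = nn.Linear(64, 4)               # new head for the target user's activity set
print(model(torch.randn(2, 6, 128)).shape)        # torch.Size([2, 4])
```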
