61. Ichthyoplankton Classification Tool using Generative Adversarial Networks and Transfer Learning. Aljaafari, Nura, 15 April 2018.
The study and analysis of marine ecosystems is a significant part of marine science research. These ecosystems are valuable resources for fisheries, help improve water quality, and can even be used in drug production. The investigation of ichthyoplankton, fish in the early stages of their life, inhabiting these ecosystems is also an important research field. At this stage, the fish are small and have relatively similar shapes, and the current way of identifying them is far from optimal: marine scientists typically send a team to collect samples from the sea, which are then taken to the lab, studied by an expert, and usually end up requiring DNA sequencing. This method is time-consuming and requires a high level of expertise. Recent advances in AI have helped solve and automate several difficult tasks, which motivated us to develop a classification tool for ichthyoplankton. We show that machine learning techniques such as generative adversarial networks combined with transfer learning solve this problem with high accuracy, whereas traditional machine learning algorithms fail to do so. We also give a general framework for creating a classification tool when only a limited training dataset is available. We aim to build a user-friendly classification tool and to provide a guide that researchers can follow when creating similar tools.
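A minimal sketch, assuming PyTorch/torchvision, of the transfer-learning half of such a pipeline: an ImageNet-pretrained backbone is frozen and only a new classification head is trained on a small set of plankton images. The GAN-based augmentation described in the abstract is not reproduced here, and the class count and folder layout are hypothetical.

```python
# Transfer-learning sketch (not the thesis's exact pipeline): fine-tune an
# ImageNet-pretrained ResNet-18 head on a small, hypothetical plankton dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SPECIES = 10  # hypothetical number of larval-fish classes

# Standard ImageNet preprocessing so the pretrained features stay meaningful.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/ichthyoplankton/<class_name>/*.jpg
train_set = datasets.ImageFolder("data/ichthyoplankton", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                    # a few epochs suffice for a frozen backbone
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```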
62. A Study on Resolution and Retrieval of Implicit Entity References in Microblogs. Lu, Jun-Li, 23 March 2020.
Kyoto University / 0048 / New system, doctoral program / Doctor of Informatics / Kō No. 22580 / Jōhaku No. 717 / 新制||情||123 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Masatoshi Yoshikawa, Professor Sadao Kurohashi, Professor Keishi Tajima, Professor Katsumi Tanaka (Professor Emeritus, Kyoto University) / Meets the requirements of Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
63. Accurate Detection of Selective Sweeps with Transfer Learning. Unknown date.
Positive natural selection leaves detectable, distinctive patterns in the genome in the form of a selective sweep. Identifying regions of the genome that have undergone selective sweeps is of great interest, as it enables understanding of species and population evolution. Previous work has accomplished this by evaluating patterns within summary statistics computed across the genome and through application of machine learning techniques to raw population genomic data. When using raw population genomic data, convolutional neural networks have most recently been employed, as they can handle large input arrays and maintain correlations among elements. Yet such models often require massive amounts of training data and can be computationally expensive to train for a given problem. Instead, transfer learning has recently been used in the image analysis literature to improve machine learning models by learning the important features of images from large unrelated datasets beforehand, and then refining these models through subsequent application to smaller and more relevant datasets. We combine transfer learning with convolutional neural networks to improve classification of selective sweeps from raw population genomic data, and we show that this combination allows for accurate classification of selective sweeps. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2021. / FAU Electronic Theses and Dissertations Collection
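An illustrative sketch of applying an image-pretrained CNN to raw population genomic data; this is not the thesis's model. It assumes a haplotype window (sampled chromosomes x SNPs with 0/1 alleles) can be treated as a single-channel image, and the shapes, layer-freezing choices, and two-class setup are assumptions.

```python
# Adapt an ImageNet-pretrained CNN to a single-channel "genotype image" and
# fine-tune it to label windows as "sweep" vs "neutral".
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the 3-channel input conv with a 1-channel one, initialised from the
# mean of the pretrained RGB filters so low-level features are preserved.
old_conv = model.conv1
new_conv = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
model.conv1 = new_conv

# Two output classes: selective sweep vs neutral evolution.
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze everything except the adapted input layer and the new classifier,
# a common compromise when the downstream dataset is small.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("conv1", "fc"))

# A fake batch of 8 haplotype windows (198 sampled chromosomes x 224 SNPs).
genotypes = torch.randint(0, 2, (8, 1, 198, 224)).float()
logits = model(genotypes)
print(logits.shape)  # torch.Size([8, 2])
```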
64. Deep Learning for Classification of COVID-19 Pneumonia, Bacterial Pneumonia, Viral Pneumonia and Normal Lungs on CT Images. Desai, Gargi Sharad, 05 October 2021.
No description available.
65. Improve the Diagnosis on Fundus Photography with Deep Transfer Learning. Guo, Chen, 21 June 2021.
No description available.
66. Domain-Aware Continual Zero-Shot Learning. Yi, Kai, 29 November 2021.
We introduce Domain-Aware Continual Zero-Shot Learning (DACZSL), the task of visually recognizing images of unseen categories in unseen domains sequentially. We created DACZSL on top of the DomainNet dataset by dividing it into a sequence of tasks, where classes are incrementally provided on seen domains during training and evaluation is conducted on unseen domains for both seen and unseen classes. We also propose a novel Domain-Invariant CZSL Network (DIN), which outperforms state-of-the-art baseline models that we adapted to the DACZSL setting. We adopt a structure-based approach to alleviate forgetting of knowledge from previous tasks, using a small per-task private network in addition to a global shared network. To encourage the private networks to capture domain- and task-specific representations, we train our model with a novel adversarial knowledge-disentanglement setting that makes the global network task-invariant and domain-invariant over all tasks. Our method also learns class-wise prompts to obtain better class-level text representations, which serve as side information enabling zero-shot prediction of future unseen classes. Our code and benchmarks are made available at https://zero-shot-learning.github.io/daczsl.
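A simplified reading of the shared/private design described above, not the authors' released code: a global shared encoder is combined with a small per-task private encoder, and images are scored against learnable per-class embeddings standing in for the class prompts. Layer sizes, the task count, and the prompt implementation are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPrivateZSL(nn.Module):
    def __init__(self, num_tasks, num_classes, feat_dim=256):
        super().__init__()
        # Global network shared across all tasks (kept task/domain-invariant).
        self.shared = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        # One small private network per task to absorb task/domain specifics.
        self.private = nn.ModuleList(
            nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64), nn.ReLU())
            for _ in range(num_tasks)
        )
        self.fuse = nn.Linear(feat_dim + 64, feat_dim)
        # Learnable class-level embeddings used for zero-shot scoring; an unseen
        # class only needs a new embedding, not new image training data.
        self.class_embed = nn.Embedding(num_classes, feat_dim)

    def forward(self, images, task_id):
        feat = self.fuse(torch.cat([self.shared(images), self.private[task_id](images)], dim=-1))
        # Cosine similarity between image features and every class embedding.
        return F.normalize(feat, dim=-1) @ F.normalize(self.class_embed.weight, dim=-1).T

# 345 is the DomainNet class count; the task count here is hypothetical.
model = SharedPrivateZSL(num_tasks=6, num_classes=345)
scores = model(torch.randn(4, 3, 64, 64), task_id=0)
print(scores.shape)  # torch.Size([4, 345])
```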
67. A Transfer Learning Approach to Object Detection Acceleration for Embedded Applications. Vance, Lauren M., 08 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / Deep learning solutions to computer vision tasks have revolutionized many industries in recent years, but embedded systems have too many restrictions to take advantage of current state-of-the-art configurations. Typical embedded processor hardware configurations must meet very low power and memory constraints to maintain small and lightweight packaging, and the architectures of the current best deep learning models are too computationally intensive for these hardware configurations. Current research shows that convolutional neural networks (CNNs) can be deployed with a few architectural modifications on Field-Programmable Gate Arrays (FPGAs), resulting in minimal loss of accuracy, similar or decreased processing speeds, and lower power consumption when compared to general-purpose Central Processing Units (CPUs) and Graphics Processing Units (GPUs). This research contributes further to these findings with the FPGA implementation of a YOLOv4 object detection model developed using transfer learning. The transfer-learned model uses the weights of a model pre-trained on the MS-COCO dataset as a starting point, then fine-tunes only the output layers for detection of five more specific object classes. The model architecture was then modified slightly for compatibility with the FPGA hardware using techniques such as weight quantization and replacing unsupported activation layer types. The model was deployed on three different hardware setups (CPU, GPU, FPGA) for inference on a test set of 100 images. It was found that the FPGA was able to achieve real-time inference speeds of 33.77 frames per second, a speedup of 7.74 frames per second compared to GPU deployment. The model also consumed 96% less power than the GPU configuration, with only approximately 4% average loss in accuracy across all five classes. The results are even more striking when compared to CPU deployment, with a 131.7-times speedup in inference throughput. CPUs have long since been outperformed by GPUs for deep learning applications but are used in most embedded systems. These results further illustrate the advantages of FPGAs for deep learning inference on embedded systems, even when transfer learning is used for an efficient end-to-end deployment process. This work advances the current state of the art with the implementation of a YOLOv4 object detection model developed with transfer learning for FPGA deployment.
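An illustrative sketch of two of the compatibility steps mentioned above, weight quantization and swapping unsupported activation layers; the actual FPGA toolchain and the thesis's exact procedure are not reproduced, and the toy block and quantization scheme are assumptions.

```python
import torch
import torch.nn as nn

def replace_unsupported_activations(model: nn.Module) -> nn.Module:
    """Swap activation types an FPGA overlay may not support (e.g. Mish/SiLU)
    for LeakyReLU, a common hardware-friendly substitute."""
    for name, child in model.named_children():
        if isinstance(child, (nn.Mish, nn.SiLU)):
            setattr(model, name, nn.LeakyReLU(0.1, inplace=True))
        else:
            replace_unsupported_activations(child)
    return model

def quantize_weights_int8(weight: torch.Tensor):
    """Simple symmetric per-tensor int8 quantization of a weight tensor."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale  # dequantize with q.float() * scale

# Toy stand-in for one YOLO convolution block.
block = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.Mish())
block = replace_unsupported_activations(block)
q_weights, scale = quantize_weights_int8(block[0].weight.data)
print(block[2], q_weights.dtype, float(scale))
```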
68. Solving Arabic Math Word Problems via Deep Learning. Alghamdi, Reem A., 14 November 2021.
This thesis studies how to automatically solve Arabic Math Word Problems (MWPs) with deep learning models. An MWP is a text description of a mathematical problem, which is solved by deriving a math equation and computing the answer. Due to their strong learning capacity, deep learning-based models can learn from the given problem description and generate the correct math equation for solving the problem. Effective models have been developed for solving MWPs in English and Chinese; however, Arabic MWPs are rarely studied. To initiate the study of Arabic MWPs, this thesis contributes the first large-scale dataset for Arabic MWPs, which contains 6,000 samples. Each sample is composed of an Arabic MWP description and the corresponding equation to solve this MWP. Arabic MWP solvers are then built with deep learning models, and their effectiveness is verified on this dataset. In addition, a transfer learning model is built to let the high-resource Chinese MWP solver boost the performance of the low-resource Arabic MWP solver. This work is the first to use deep learning methods to solve Arabic MWPs and the first to use transfer learning to solve MWPs across different languages. The solver enhanced by transfer learning achieves an accuracy of 74.15%, which is 3% higher than the baseline that does not use transfer learning. In addition, the accuracy is more than 7% higher than the baseline for templates represented by only a few samples. Furthermore, the model can generate new equation sequences that were not seen during training, with an accuracy of 27% (11% higher than the baseline).
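A hedged sketch of the cross-lingual transfer idea: the equation decoder's output vocabulary (numbers and operators) is largely language-independent, so it can be initialized from a solver trained on a high-resource language while the Arabic text encoder is trained from scratch. The architecture, vocabulary sizes, and checkpoint name below are hypothetical, not from the thesis.

```python
import torch
import torch.nn as nn

class MWPSolver(nn.Module):
    def __init__(self, src_vocab, eq_vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, hidden)          # problem-text tokens
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.eq_embed = nn.Embedding(eq_vocab, hidden)        # equation tokens
        self.out = nn.Linear(hidden, eq_vocab)

    def forward(self, text_ids, eq_ids):
        _, h = self.encoder(self.embed(text_ids))             # encode the word problem
        dec_out, _ = self.decoder(self.eq_embed(eq_ids), h)   # teacher-forced decoding
        return self.out(dec_out)                              # logits over equation tokens

arabic_solver = MWPSolver(src_vocab=30000, eq_vocab=40)

# Transfer step: copy only the language-independent parts (decoder, equation
# embedding, output layer) from a hypothetical Chinese-solver checkpoint.
# chinese_state = torch.load("chinese_mwp_solver.pt")
# transferable = {k: v for k, v in chinese_state.items()
#                 if k.startswith(("decoder", "eq_embed", "out"))}
# arabic_solver.load_state_dict(transferable, strict=False)

logits = arabic_solver(torch.randint(0, 30000, (2, 40)), torch.randint(0, 40, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 40])
```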
69. Reducing the Manual Annotation Effort for Handwriting Recognition Using Active Transfer Learning. Burdett, Eric, 23 August 2021.
Handwriting recognition systems have achieved remarkable performance over the past several years with the advent of deep neural networks. For high-quality recognition, these models require large amounts of labeled training data, which can be difficult to obtain. Various methods to reduce this effort have been proposed in the realms of active and transfer learning, but not in combination. We propose an approach for fitting new handwriting recognition models that joins active and transfer learning into a unified framework. Empirical results show the superiority of our method compared to traditional active learning, transfer learning, or standard supervised training schemes.
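A minimal sketch of one round of such an active-transfer-learning loop, with a toy linear model standing in for a transfer-learned recognizer; the query strategy (predictive entropy) and the labeling budget are assumptions, not the thesis's method.

```python
import torch
import torch.nn as nn

def entropy_query(model: nn.Module, pool: torch.Tensor, budget: int) -> torch.Tensor:
    """Return indices of the `budget` pool samples with highest predictive entropy."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(pool), dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return torch.topk(entropy, k=budget).indices

# Stand-in for a transfer-learned character classifier (e.g. a pretrained CNN
# backbone with a new output head); here just a toy linear model over 80 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 80))

unlabeled_pool = torch.rand(1000, 1, 28, 28)   # hypothetical unlabeled handwriting crops
query_indices = entropy_query(model, unlabeled_pool, budget=32)

# The queried samples would now be labeled by an annotator, added to the training
# set, and the model fine-tuned again, repeating until the budget is exhausted.
print(query_indices.shape)  # torch.Size([32])
```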
70. Novel Deep Learning Models for Medical Imaging Analysis. January 2019.
abstract: Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation focuses on developing deep learning models for medical imaging analysis of different modalities for different tasks, including detection, segmentation and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET) and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes different modalities as additional feature sources; both the original image and the synthetic image are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and proved to outperform the competing models. Motivated by the success of the first phase, the second phase focuses on improving medical imaging synthesis performance with an advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and cross-modality feature transfer. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2019
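A simplified sketch of the multi-task, feature-transfer idea as read from the abstract, not the dissertation's FT-MTL-Net: a shared encoder feeds both a segmentation decoder and a classifier, and the classifier also consumes pooled high-resolution features taken from the segmentation branch. Layer choices and shapes are placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation branch keeps spatial resolution (high-resolution features).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                      # per-pixel lesion mask logits
        )
        # Classification branch: pooled encoder features plus pooled
        # segmentation-branch features transferred from the other task.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32 + 16, num_classes)

    def forward(self, x):
        feat = self.encoder(x)
        seg_feat = self.seg_head[0:2](feat)           # high-res features before the mask layer
        mask = self.seg_head[2](seg_feat)
        pooled = torch.cat([self.pool(feat).flatten(1), self.pool(seg_feat).flatten(1)], dim=1)
        return mask, self.classifier(pooled)

model = MultiTaskNet()
mask_logits, class_logits = model(torch.randn(2, 1, 128, 128))
print(mask_logits.shape, class_logits.shape)  # (2, 1, 128, 128) (2, 2)
```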