1.
A Study on Resolution and Retrieval of Implicit Entity References in Microblogs / マイクロブログにおける暗黙的な実体参照の解決および検索に関する研究 / Lu, Jun-Li, 23 March 2020 (has links)
Kyoto University / 0048 / New-system doctoral programme / Doctor of Informatics / Kou No. 22580 / Joho-haku No. 717 / 新制||情||123 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Masatoshi Yoshikawa; Professor Sadao Kurohashi; Professor Keishi Tajima; Professor Katsumi Tanaka (Professor Emeritus, Kyoto University) / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
2.
Learning Transferable Features for Diagnosis of Breast Cancer from Histopathological Images / Al Zorgani, Maisun M.; Irfan, Mehmood; Ugail, Hassan, 25 March 2022 (has links)
No / Nowadays, there is no argument that deep learning algorithms provide impressive results in many applications of medical image analysis. However, the data-scarcity problem and its consequences remain challenges for implementing deep learning in the digital histopathology domain. Deep transfer learning is one possible solution to these challenges, and extracting off-the-shelf features from pre-trained convolutional neural networks (CNNs) is one of the common deep transfer learning approaches. The architecture of a deep CNN plays a significant role in determining the optimal transferable features for classifying cancerous histopathological images. In this study, we investigated three CNNs pre-trained on the ImageNet dataset, namely the ResNet-50, DenseNet-201 and ShuffleNet models, for classifying the Breast Cancer Histology (BACH) Challenge 2018 dataset. The deep features extracted from these three models were used to train two machine-learning classifiers, the K-Nearest Neighbour (KNN) and the Support Vector Machine (SVM), to classify breast cancer grades. Four grades are present in the BACH challenge dataset: normal tissue, benign tumour, in-situ carcinoma and invasive carcinoma. The performance of the target classifiers was evaluated. Our experimental results show that the off-the-shelf features extracted from the DenseNet-201 model provide the best predictive accuracy with both classifiers, yielding image-wise classification accuracies of 93.75% and 88.75% for the SVM and KNN classifiers, respectively. These results indicate the high robustness of our proposed framework.
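The pipeline described in this abstract, training conventional classifiers on frozen deep features, can be sketched as follows. This is a minimal illustration, not the authors' code: random vectors with class-dependent means stand in for the off-the-shelf features that the study extracts from DenseNet-201, and scikit-learn's SVM and KNN classifiers play the role of the two target classifiers over the four assumed grade labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for off-the-shelf deep features: in the study these would be
# activations from a pre-trained CNN (e.g. DenseNet-201); here they are
# synthetic vectors whose mean shifts with the class label.
n_per_class, n_features = 50, 256
grades = ["normal", "benign", "in-situ", "invasive"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_features))
               for i in range(len(grades))])
y = np.repeat(np.arange(len(grades)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Train the two target classifiers on the frozen features;
# the CNN itself is never fine-tuned in this approach.
svm = SVC(kernel="linear").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
print(f"KNN accuracy: {knn.score(X_te, y_te):.2f}")
```

Because the feature extractor is frozen, only the lightweight classifier is trained, which is what makes this approach attractive when labelled histopathology data are scarce.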