
RCNX: Residual Capsule Next

Narukkanchira Anilkumar, Arjun, May 2021 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / New machine learning models appear every day, and most computer-vision models build on the basic structure of the Convolutional Neural Network (CNN). Developers use CNNs extensively for image classification, object recognition, and image segmentation. Although CNNs produce highly capable models with superior accuracy, they have disadvantages: estimating pose and transformations is difficult for a CNN, because its operations can learn only shift-invariant features of an image. These limitations motivate machine learning developers to design more complex architectures.

The search for new models led to Capsule Networks, which can estimate an object's pose in an image and recognize transformations of that object. Handwritten digit classification was the task Capsule Networks were initially designed to solve, and they outperform all other models on the MNIST handwritten-digit dataset; however, applying Capsule Networks to general image classification is not a straightforward matter of multiplying parameters. By replacing the Capsule Network's initial layer, a single convolutional layer, with more sophisticated CNN architectures, the authors of the Residual Capsule Network achieved a substantial improvement in capsule-network applications without a large number of parameters.

This thesis focuses on improving the recent Residual Capsule Network (RCN) so that accuracy and model size are optimal for image classification, benchmarked on the CIFAR-10 dataset. Our search for an exemplary capsule network led to two new architectures: RCN2 (Residual Capsule Network 2) and RCNX (Residual Capsule NeXt), the next generation of RCN. Both outperform existing capsule-network architectures for image classification, such as 3-level RCN, DCNet, DCNet++, and the original Capsule Network, and even outperform compact CNNs such as MobileNet V3.

RCN2 achieved 85.12% accuracy with 1.95 million parameters, and RCNX achieved 89.31% accuracy with 1.58 million parameters on the CIFAR-10 benchmark.
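
The key architectural idea described in the abstract is to replace the capsule network's single initial convolutional layer with a residual feature extractor feeding the primary-capsule layer. The following is a minimal, hypothetical PyTorch sketch of that idea only; all layer names, channel sizes, and block counts are illustrative assumptions and do not reproduce the RCN, RCN2, or RCNX architectures from the thesis.

```python
# Illustrative sketch: residual backbone in front of a primary-capsule layer.
# Sizes and names are assumptions, not the thesis architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Basic two-convolution residual block with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)          # residual connection


class PrimaryCapsules(nn.Module):
    """Convolution whose output is reshaped into capsule vectors and squashed."""
    def __init__(self, in_channels, num_capsule_maps=8, capsule_dim=8):
        super().__init__()
        self.capsule_dim = capsule_dim
        self.conv = nn.Conv2d(in_channels, num_capsule_maps * capsule_dim,
                              kernel_size=9, stride=2)

    def forward(self, x):
        u = self.conv(x)                               # (B, maps*dim, H, W)
        u = u.view(x.size(0), -1, self.capsule_dim)    # (B, num_capsules, dim)
        norm = u.norm(dim=-1, keepdim=True)
        # squash non-linearity keeps capsule vector lengths in [0, 1)
        return (norm ** 2 / (1 + norm ** 2)) * (u / (norm + 1e-8))


class TinyResidualCapsuleNet(nn.Module):
    """Residual backbone -> primary capsules (hypothetical sizes for CIFAR-10)."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, padding=1)     # CIFAR-10 input: 3x32x32
        self.backbone = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.primary_caps = PrimaryCapsules(64)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.backbone(x)           # replaces CapsNet's single initial conv
        return self.primary_caps(x)    # capsule vectors for routing/classification


if __name__ == "__main__":
    caps = TinyResidualCapsuleNet()(torch.randn(2, 3, 32, 32))
    print(caps.shape)                  # (2, num_capsules, 8)
```

In the original CapsNet the stem and backbone would be a single 9x9 convolution; the point of the residual variant is that a deeper shortcut-connected extractor can feed richer features to the capsules without a proportional growth in parameter count.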
