
Deep Learning for Printed Image Quality

This research focuses on developing algorithms to automatically classify, detect, simulate, and correct defects in printed images, since human visual assessment of print quality is unreliable. With the development of deep learning, state-of-the-art accuracy has been achieved on many computer vision tasks, and this research applies deep learning methods to printed image quality assessment. Because most deep learning approaches require large amounts of data even after data augmentation, we propose using Generative Adversarial Networks to generate simulated images. The simulated images with artifacts can be used to train classifier, detector, and corrector networks for printed image quality. Another essential preprocessing step for printed image quality assessment is image registration, which aligns two input images so that defects and differences between them can be detected. This research proposes a deep learning framework for global image registration, accelerated by parallel computation. For deformable local registration, we implement a U-Net-based VoxelMorph method for printed image registration, and we then propose a recurrent network-based method, R-RegNet. Experimental results show that the proposed R-RegNet outperforms the U-Net-based VoxelMorph method on all three datasets considered. Finally, we propose a photorealistic image simulation method for building datasets to train deep neural networks, and introduce a new dataset of simulated images, named Extra FAT, for object detection and 6D pose estimation.
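For the deformable local registration step mentioned above, the thesis builds on VoxelMorph. The sketch below (assuming PyTorch; the network name RegNet, the layer sizes, and the loss weights are illustrative assumptions, not the thesis's exact configuration) shows the general idea of that style of method: a small encoder-decoder predicts a dense displacement field between a reference image and a scanned print, a spatial transformer warps the scanned image accordingly, and the pair is trained without ground-truth deformations using an image-similarity loss plus a smoothness penalty.

# Minimal VoxelMorph-style deformable registration sketch (assumed PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Small encoder-decoder that predicts a 2-channel (dx, dy) displacement field."""
    def __init__(self, channels=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, channels, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(channels, 2, 4, stride=2, padding=1),
        )

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)   # stack reference and scanned image
        return self.dec(self.enc(x))            # displacement field, shape (B, 2, H, W)

def warp(moving, flow):
    """Warp `moving` by the displacement field `flow` with a spatial transformer."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(moving.device)    # identity grid (2, H, W)
    new = grid.unsqueeze(0) + flow                                   # displaced sampling locations
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0                          # normalize to [-1, 1]
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack([new_x, new_y], dim=-1)                # (B, H, W, 2)
    return F.grid_sample(moving, sample_grid, align_corners=True)

# One unsupervised training step: similarity to the reference plus flow smoothness.
net = RegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
fixed = torch.rand(1, 1, 128, 128)    # reference (original) image
moving = torch.rand(1, 1, 128, 128)   # scanned print to be aligned
opt.zero_grad()
flow = net(fixed, moving)
warped = warp(moving, flow)
smooth = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean() + \
         (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
loss = F.mse_loss(warped, fixed) + 0.1 * smooth
loss.backward()
opt.step()

The recurrent R-RegNet method described in the abstract can be thought of as iterating a predict-and-warp step of this kind so that the displacement field is refined over several passes; the exact recurrence is specified in the thesis itself.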

DOI: 10.25394/pgs.19400282.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/19400282
Date: 20 April 2022
Creators: Jianhang Chen (12275537)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Deep_Learning_for_Printed_Image_Quality/19400282
