
Video Super-Resolution via Dynamic Local Filter Network

Video super-resolution (VSR) aims to produce a satisfying estimate of a high-resolution (HR) image from multiple similar low-resolution (LR) images by exploiting their hidden redundancy. The rapid development of convolutional neural network (CNN) techniques provides numerous new possibilities for solving the VSR problem. Recent VSR methods combine CNNs with motion compensation to cancel the inconsistencies among the LR images and merge them into an HR image. To compensate for motion, pixels in the input frames are warped according to optical-flow-like information. In this procedure, a trade-off must be made between the distraction caused by spatio-temporal inconsistencies and the pixel-wise detail damage caused by the compensation.

We propose a novel VSR method, named Video Super-Resolution via Dynamic Local Filter Network, and its upgraded edition, Video Super-Resolution with Compensation in Feature Extraction.
Both methods perform motion compensation via a dynamic local filter network, which processes the input images with dynamically generated filter kernels. These kernels are sample-specific and position-specific. Therefore, our proposed methods can eliminate the inter-frame differences during feature extraction without explicitly manipulating pixels. The experimental results demonstrate that our methods outperform state-of-the-art VSR algorithms in terms of PSNR and SSIM and recover more details with superior visual quality.
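To illustrate the core operation described above, the following is a minimal sketch of dynamic local filtering: each output pixel is produced by its own kernel, as opposed to a conventional convolution that shares one kernel across all positions. The function name, the use of NumPy, and the edge padding are assumptions for illustration; in the thesis's setting the `kernels` tensor would be predicted per-sample and per-position by a kernel-generation network, which is not shown here.

```python
import numpy as np

def dynamic_local_filter(image, kernels):
    """Apply a position-specific k x k kernel at every pixel.

    image:   (H, W) array, one input frame.
    kernels: (H, W, k, k) array, one kernel per output position
             (in a dynamic local filter network these would be
             generated by a separate CNN, not fixed weights).
    """
    H, W, k, _ = kernels.shape
    pad = k // 2
    # Edge padding (an illustrative choice) keeps the output the
    # same size as the input.
    padded = np.pad(image, pad, mode="edge")
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            # Unlike a shared-weight convolution, the kernel here
            # depends on (y, x): it can shift or re-weight the local
            # neighborhood differently at every position, which is
            # how per-pixel motion compensation can be expressed.
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

If every predicted kernel places all its weight one pixel to the left of center, the filter shifts the frame by one pixel, i.e. it performs a warp without explicit pixel resampling; this is the sense in which dynamically generated kernels can absorb inter-frame motion during feature extraction.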

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/37939
Date: 30 July 2018
Creators: Zhou, Yang
Contributors: Zhao, Jiying
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
