Alpha matting is an important topic in computer vision, with applications in virtual reality, digital image and video editing, and image synthesis. Conventional alpha matting approaches perform unsatisfactorily on complicated backgrounds and foregrounds, and they struggle to extract an accurate alpha matte when the foreground objects are transparent, semi-transparent, perforated, or hairy. Fortunately, the rapid development of deep learning techniques brings new possibilities for solving these problems.
In this thesis, we propose a residual convolutional grid network for alpha matting. The network is based on convolutional neural networks (CNNs) and learns the alpha matte directly from the original image and its trimap. Our grid network consists of horizontal residual convolutional computation blocks and vertical upsampling/downsampling convolutional computation blocks. By choosing which paths information flows through, the network can both retain the rich details of the image and extract its high-level abstract semantic information. The experimental results demonstrate that our method solves matting problems that have plagued conventional matting methods for decades and outperforms all but one of the state-of-the-art matting methods in quantitative and visual evaluations. The one exception, the current best matting method, performs only slightly better than ours while requiring three times as many trainable parameters. Hence, our method offers the best trade-off among computational complexity, memory usage, and matting performance.
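The thesis itself is not included in this record, but the abstract's description of horizontal residual rows joined by vertical up/downsampling streams resembles grid-style architectures such as GridNet. The following is a minimal PyTorch sketch under that assumption only; the class names, channel counts, two-row layout, and the image-plus-trimap input concatenation are illustrative guesses, not details taken from the thesis.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Horizontal computation block: two 3x3 convs with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class Downsample(nn.Module):
    """Vertical block: strided conv halves resolution, doubles channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))


class Upsample(nn.Module):
    """Vertical block: transposed conv doubles resolution, halves channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))


class GridMattingNet(nn.Module):
    """Hypothetical two-row grid: a full-resolution row preserves fine detail
    while a downsampled row extracts semantics; the vertical blocks let the
    two rows exchange information."""
    def __init__(self, base=32):
        super().__init__()
        # Input: RGB image (3 channels) concatenated with a 1-channel trimap.
        self.stem = nn.Conv2d(4, base, 3, padding=1)
        self.row0 = nn.ModuleList([ResidualBlock(base) for _ in range(3)])
        self.down = Downsample(base, base * 2)
        self.row1 = nn.ModuleList([ResidualBlock(base * 2) for _ in range(3)])
        self.up = Upsample(base * 2, base)
        self.head = nn.Conv2d(base, 1, 3, padding=1)

    def forward(self, image, trimap):
        x = torch.relu(self.stem(torch.cat([image, trimap], dim=1)))
        x = self.row0[0](x)
        y = self.down(x)                   # enter the low-resolution row
        for blk in self.row1:
            y = blk(y)
        x = self.row0[1](x) + self.up(y)   # merge the semantic stream back
        x = self.row0[2](x)
        return torch.sigmoid(self.head(x))  # alpha matte in [0, 1]


net = GridMattingNet()
alpha = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(alpha.shape)  # torch.Size([1, 1, 64, 64])
```

The sketch illustrates the trade-off the abstract points to: the full-resolution row is cheap in parameters and keeps detail, while the deeper, downsampled row adds semantic capacity without the parameter cost of a uniformly deep encoder-decoder.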
Identifier | oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/39467
Date | 23 July 2019
Creators | Zhang, Huizhen
Contributors | Zhao, Jiying
Publisher | Université d'Ottawa / University of Ottawa
Source Sets | Université d'Ottawa
Language | English
Detected Language | English
Type | Thesis
Format | application/pdf