Bokeh is a soft out-of-focus blur: an image with bokeh has a subject in focus and an artistically blurred background. Capturing real bokeh requires specific camera parameter choices, most importantly a large lens with a wide aperture. Because smartphone cameras are so small, real bokeh is effectively impossible to achieve with them. Newer smartphone models commonly apply artificial bokeh to still photographs, but capturing video with artificial bokeh remains uncommon. Video segmentation is more demanding than image segmentation because it places higher requirements on performance, and the result should also be temporally consistent. The aim of this project is to create a method that applies real-time video bokeh on a smartphone.

The project consists of two parts. The first part is segmenting the subject of the video, which is performed with convolutional neural networks. Three image segmentation networks were implemented for video, trained, and evaluated; the model that showed the most potential was SINet, which was chosen as the most suitable architecture for the task. The second part is manipulating the background so that it is aesthetically pleasing while mimicking real optics to some degree. This is achieved by creating a depth map and a contrast map. With the depth map, the background is blurred according to depth, and the shape of the bokeh points also varies with depth. The contrast map is used to locate where bokeh points should be placed. The segmentation is the main part of the project.

The result is a method that achieves accurate segmentation and creates an artistic background. The different architectures showed similar accuracy but differed in inference time. In some situations the segmentation failed and included too much of the background; this could potentially be counteracted with a larger and more varied dataset. The method runs in real time on a computer, but no conclusion could be drawn about whether it runs in real time on a smartphone.
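The abstract only outlines the background-manipulation pipeline; the sketch below illustrates, under stated assumptions, how a depth-dependent blur, a contrast map for bokeh placement, and the final composite with the segmentation mask could fit together. It is not the thesis implementation: the function name, library choices (OpenCV/NumPy), and all numeric parameters are illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation) of depth-based background
# blur plus contrast-guided bokeh points. Assumes a segmentation mask and a
# depth map are already available as float arrays in [0, 1].
import cv2
import numpy as np


def apply_video_bokeh(frame, fg_mask, depth, n_levels=4, max_points=40):
    """frame: HxWx3 uint8 BGR, fg_mask: HxW float (1 = subject),
    depth: HxW float (1 = far). Parameter values are illustrative."""
    out = frame.astype(np.float32)

    # 1) Depth-dependent blur: blend between progressively stronger
    #    Gaussian blurs so that far pixels receive larger blur kernels.
    blurred = out.copy()
    for level in range(1, n_levels + 1):
        k = 2 * (4 * level) + 1                      # kernel sizes 9, 17, 25, ...
        strong = cv2.GaussianBlur(out, (k, k), 0)
        weight = np.clip(depth * n_levels - (level - 1), 0.0, 1.0)[..., None]
        blurred = blurred * (1.0 - weight) + strong * weight

    # 2) Contrast map: high local contrast in the background marks bright
    #    highlights that real lenses render as bokeh disks.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F)) * (1.0 - fg_mask)

    # 3) Draw bokeh disks at the strongest contrast points; the disk radius
    #    grows with depth to mimic the real circle of confusion.
    ys, xs = np.unravel_index(
        np.argsort(contrast, axis=None)[-max_points:], contrast.shape)
    for y, x in zip(ys, xs):
        radius = int(3 + 12 * depth[y, x])
        color = frame[y, x].astype(np.float32).tolist()
        overlay = blurred.copy()
        cv2.circle(overlay, (int(x), int(y)), radius, color, -1)
        blurred = cv2.addWeighted(blurred, 0.7, overlay, 0.3, 0.0)

    # 4) Composite: keep the segmented subject sharp, use the manipulated
    #    background everywhere else.
    alpha = fg_mask[..., None]
    result = out * alpha + blurred * (1.0 - alpha)
    return np.clip(result, 0, 255).astype(np.uint8)
```

In this sketch the disk radius plays the role described in the abstract of varying the bokeh shape with depth; a production version would instead need a temporally consistent mask and depth estimate per frame to avoid flicker.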
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-478412 |
Date | January 2022 |
Creators | Kanon, Jerker |
Publisher | Uppsala universitet, Institutionen för informationsteknologi |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UPTEC F, 1401-5757 ; 22021 |