
Automatic Calibration of Camera Parameters: A Steppingstone to Analysing Videos

Smartphones are commonly used for image and video capture. Because a smartphone is lightweight, small hand shakes are noticeable in recorded videos, and the CMOS image sensors typically found in smartphones introduce rolling shutter artefacts. Stabilisation is applied to minimise the effects of camera movement and of these artefacts, and it relies on accurate camera parameters. The manufacturer usually provides nominal values for the camera parameters, but calibration is required to obtain the best results. Calibration is performed using a video together with the associated sensor parameters the smartphone records during capture.

Calibration consists of four components. The first is feature point detection; feature points are high-contrast visual features that are easily detected. The second is optical flow, which is used to track the feature points between consecutive frames. The third is an objective function: the points are stabilised, and the objective function is defined as a measure of how much the stabilised points still move. The final component is an optimisation method used to find the minimum of the objective function. In its simplest form, the calibration algorithm detects points in the first frame of a video, tracks them throughout the video, and uses the optimisation to minimise the corresponding objective function. Since the stabilisation depends on the calibration parameters, the calibrated values are those that give the lowest objective function value. This method was later refined to analyse frames and feature points and to calibrate on a subset of a video.

Because of the large number of camera units produced each year, it is infeasible to calibrate every unit individually. Instead, one unit of each type is calibrated, and this calibration is used for all units of that type. One way to reach each unit individually, rather than relying on a general calibration, is to introduce automatic calibration: every unit would calibrate itself automatically as users record 'normal' videos. However, demands are placed on the calibration video, which is why it is normally recorded under controlled circumstances. This project aims to map which video properties can negatively affect the calibration, show how these properties can be detected, and analyse the effects of calibrating on the parts of the video that do not contain them.

The critical video properties were expected to be movement in the scene, the camera movement, the distance to the feature points, and the number and spread of the feature points. An object detection algorithm was used to check for moving objects in the video. First, lack of camera movement was checked by comparing the relative rotation of consecutive frames. Second, motion blur was analysed in two ways: the primary method calculated the number of blurred pixels from the exposure time and the camera rotation, while the other simply checked the exposure time. Next, feature points located on objects close to the camera were eliminated by comparing the relative movements of the feature points and removing those that move more than the others. For the feature point amount, a strict limit was placed on the number of points. Finally, to check the spread, the frame was divided into boxes and a limit was placed on the number of points per box. Sketches of these steps are given below. The frame and feature point elimination methods were evaluated by letting three people record 'normal' user videos.
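As a concrete illustration of the calibration loop described above, the following is a minimal sketch assuming OpenCV for feature detection and optical flow and SciPy for the optimisation. The `stabilise` function is a hypothetical placeholder for the smartphone's stabilisation pipeline, and all parameter values are illustrative rather than taken from the thesis.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def stabilise(pts, params, gyro_data):
    # Hypothetical placeholder: a real implementation would undo camera
    # rotation and rolling shutter distortion using the gyroscope data and
    # the camera parameters being calibrated (focal length, readout time, ...).
    return pts.reshape(-1, 2)

def objective(params, frames, gyro_data):
    """Measure how much the stabilised feature points still move."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    # Detect high-contrast feature points in the first frame.
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    residual = 0.0
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track the points into the next frame with sparse optical flow.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        ok = status.ravel() == 1
        a = stabilise(pts[ok], params, gyro_data)
        b = stabilise(nxt[ok], params, gyro_data)
        # Perfectly stabilised points would not move at all.
        residual += float(np.sum((a - b) ** 2))
        prev, pts = gray, nxt[ok].reshape(-1, 1, 2)
    return residual

# The calibrated values are those that minimise the objective, e.g.:
# result = minimize(objective, x0=initial_params, args=(frames, gyro_data),
#                   method="Nelder-Mead")
```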
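The two frame-level checks (lack of camera rotation and motion blur) could look as follows. The blur estimate uses the standard approximation that a point near the image centre sweeps roughly f·ω·t_exp pixels during the exposure; the formula and all threshold values are assumptions for illustration, not necessarily the thesis's exact choices.

```python
import numpy as np

def rotation_between_frames(gyro_samples):
    """Integrate gyroscope angular speed over one frame interval.

    gyro_samples: iterable of (angular_velocity_vector_rad_s, dt_seconds).
    Returns the approximate rotation in radians.
    """
    return sum(np.linalg.norm(omega) * dt for omega, dt in gyro_samples)

def blurred_pixels(angular_speed, exposure_time, focal_length_px):
    """Approximate motion blur in pixels caused by camera rotation.

    A point near the image centre moves roughly f * omega * t_exp pixels
    during the exposure (omega in rad/s, t_exp in seconds, f in pixels).
    """
    return focal_length_px * angular_speed * exposure_time

# Illustrative frame rejection (threshold values assumed):
# too_static  = rotation_between_frames(samples) < 0.01
# too_blurred = blurred_pixels(omega, t_exp, f_px) > 2.0
```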
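Points on objects close to the camera (or on independently moving objects) show larger frame-to-frame displacement than the rest. A simple way to remove them, sketched here with an assumed median-based threshold, is:

```python
import numpy as np

def remove_outlier_points(prev_pts, next_pts, factor=3.0):
    """Drop feature points that move much more than the others.

    prev_pts/next_pts: float32 arrays of shape (N, 1, 2), as returned by
    OpenCV tracking. The median absolute deviation rule and the `factor`
    value are illustrative assumptions.
    """
    disp = np.linalg.norm(next_pts - prev_pts, axis=-1).ravel()
    median = np.median(disp)
    mad = np.median(np.abs(disp - median)) + 1e-9  # guard against zero deviation
    keep = np.abs(disp - median) < factor * mad
    return prev_pts[keep], next_pts[keep]
```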
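Finally, the amount and spread checks: the amount check is a strict limit on the total number of points, while the spread check divides the frame into boxes and caps the points per box. The grid size and limits below are illustrative values, not the thesis's.

```python
import numpy as np

MIN_POINTS = 50  # assumed strict limit on the total number of points

def enforce_spread(points, frame_shape, grid=(4, 4), max_per_box=10):
    """Cap the number of feature points per grid box.

    points: array of shape (N, 1, 2) or (N, 2) with (x, y) coordinates.
    Returns surviving points in original order, or None if too few remain.
    """
    h, w = frame_shape[:2]
    rows, cols = grid
    flat = points.reshape(-1, 2)
    counts, keep = {}, []
    for i, (x, y) in enumerate(flat):
        box = (min(int(y * rows / h), rows - 1),
               min(int(x * cols / w), cols - 1))
        counts[box] = counts.get(box, 0) + 1
        if counts[box] <= max_per_box:
            keep.append(i)
    kept = flat[keep]
    return kept if len(kept) >= MIN_POINTS else None  # too few points: reject
```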
Calibration was performed both with a reference calibration, which removed no frames or feature points, and with the different frame and feature point elimination methods, and the resulting calibrated values were evaluated. Some frame removal methods perform slightly better than the reference calibration. However, they do not successfully eliminate all bad frames or keep all good frames, so more effective limits might need to be implemented. Only a subset of the 'normal' videos was sufficiently good to calibrate on. After such improvements, adapting the method to a real-time implementation on a smartphone would be the next major step towards real-time calibration.

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-478521
Date January 2022
Creators Hallén, Wilma
Publisher Uppsala universitet, Avdelningen för systemteknik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation UPTEC F, 1401-5757 ; UPTEC F 22028
