First-person videos (FPVs) captured by wearable cameras provide a vast amount of visual data. FPVs have characteristics distinct from broadcast and mobile videos: their quality is degraded by motion blur, tilt, rolling shutter, and exposure distortions. In this work, we design image and video quality assessment methods applicable to FPVs.

Our video quality assessment focuses on three problems. The first is video frame artifacts, including motion blur, tilt, and rolling shutter, caused by the heavy, unstructured motion in FPVs. The second is exposure distortion, which arises when the camera sensor receives an improper amount of light, often because of poor environmental lighting or capture angles. The third is the increased blurriness after video stabilization: a stabilized video is perceptually blurrier than its original because the masking effect of motion is no longer present.

To evaluate video frame artifacts, we introduce a new strategy for image quality estimation, called mutual reference (MR), which uses the information provided by overlapping content to estimate image quality. The MR strategy is applied to FPVs by partitioning temporally nearby frames with similar content into sets and estimating their visual quality using their mutual information. We propose an MR quality estimator, Local Visual Information (LVI), that estimates the relative quality between two overlapping images (sketched below).

To alleviate exposure distortions, we propose a controllable illumination enhancement method that adjusts the amount of enhancement with a single knob. The knob can be driven by our proposed over-enhancement measure, the Lightness Order Measure (LOM). Since visual quality is an inverted-U-shaped function of the amount of enhancement, we control the knob so that the image is enhanced to the peak of visual quality (sketched below).

To estimate the increased blurriness after stabilization, we propose a visibility-inspired temporal pooling (VTP) mechanism. VTP models the motion masking effect on perceived video blurriness as the influence of a frame's visibility on the temporal pooling weight of that frame's quality score. Visibility is estimated as the proportion of spatial detail that is visible to human observers (sketched below).
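The record above names the MR strategy and LVI without giving their formulations, so the following Python sketch is only illustrative: frames are grouped into sets by a simple correlation criterion, and a gradient-magnitude ratio stands in for LVI's relative-quality estimate. All function names and thresholds here are hypothetical, not the thesis' actual method.

```python
import numpy as np

def partition_into_sets(frames, sim_threshold=0.8):
    """Group temporally nearby frames with similar content into sets.
    Similarity is normalized correlation of consecutive frames -- a
    stand-in for the thesis' overlap criterion."""
    sets, current = [], [0]
    for i in range(1, len(frames)):
        a = frames[i - 1].astype(np.float64).ravel()
        b = frames[i].astype(np.float64).ravel()
        a -= a.mean(); b -= b.mean()
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim >= sim_threshold:
            current.append(i)
        else:
            sets.append(current); current = [i]
    sets.append(current)
    return sets

def relative_quality(img_a, img_b):
    """Hypothetical stand-in for LVI: relative sharpness of two
    overlapping frames from the ratio of mean gradient magnitudes.
    Returns > 1 when img_a appears sharper than img_b."""
    def grad_energy(img):
        gy, gx = np.gradient(img.astype(np.float64))
        return np.mean(np.hypot(gx, gy))
    return grad_energy(img_a) / (grad_energy(img_b) + 1e-12)

def mutual_reference_scores(frames, frame_set):
    """Score each frame in a set against its peers, using the peers as
    mutual references; a frame's score is its mean relative quality."""
    scores = {}
    for i in frame_set:
        ratios = [relative_quality(frames[i], frames[j])
                  for j in frame_set if j != i]
        scores[i] = float(np.mean(ratios)) if ratios else 1.0
    return scores
```

The key point the sketch captures is that no pristine reference is needed: each frame's quality is estimated only relative to temporally adjacent frames that share its content.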
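The LOM-driven enhancement can likewise be sketched under stated assumptions: a clipped gain curve stands in for the single-knob enhancement, and LOM is approximated as the fraction of sampled pixel pairs whose lightness order is violated after enhancement (clipping merges distinct lightness levels, which is one symptom of over-enhancement). The enhancement curve, the LOM formula, and the budget value are assumptions illustrating the control loop, not the thesis' definitions.

```python
import numpy as np

def enhance(img, k):
    """Single-knob illumination enhancement. A clipped brightness gain
    is used as a stand-in; k = 0 leaves the image unchanged."""
    x = img.astype(np.float64) / 255.0
    return np.clip(x * (1.0 + k), 0.0, 1.0)

def lightness_order_measure(orig, enh, n_pairs=20000, rng=None):
    """Stand-in for LOM: the fraction of randomly sampled pixel pairs
    whose lightness order in the original is not preserved after
    enhancement. 0 = order fully preserved; larger values indicate
    over-enhancement."""
    rng = rng or np.random.default_rng(0)
    o = orig.ravel().astype(np.float64)
    e = enh.ravel().astype(np.float64)
    i = rng.integers(0, o.size, n_pairs)
    j = rng.integers(0, o.size, n_pairs)
    keep = o[i] != o[j]                      # pairs with a defined order
    violated = np.sign(o[i] - o[j]) != np.sign(e[i] - e[j])
    return float(np.mean(violated[keep]))

def enhance_to_peak(img, lom_budget=0.02, k_grid=np.linspace(0, 4, 41)):
    """Turn the knob up as long as over-enhancement (LOM) stays within
    budget; the largest admissible k approximates the quality peak of
    the inverted-U quality-vs-enhancement curve."""
    best_k = 0.0
    for k in k_grid:
        if lightness_order_measure(img, enhance(img, k)) <= lom_budget:
            best_k = k
        else:
            break
    return enhance(img, best_k), best_k
```

The design point is that LOM acts as the feedback signal: enhancement is pushed up the rising side of the inverted-U quality curve and stopped before over-enhancement pulls quality back down.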
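Finally, a minimal sketch of visibility-weighted pooling in the spirit of VTP, assuming a simple visibility proxy: the proportion of pixels whose local gradient magnitude exceeds a contrast threshold. The proxy and threshold are assumptions; only the weighting scheme reflects the mechanism described above.

```python
import numpy as np

def visibility(frame, contrast_thresh=8.0):
    """Stand-in visibility measure: the proportion of pixels whose local
    gradient magnitude exceeds a contrast threshold, i.e. the fraction
    of spatial detail a viewer can resolve in this frame."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy) > contrast_thresh))

def vtp_pool(frame_scores, frames):
    """Visibility-weighted temporal pooling: frames whose detail is
    masked by motion (low visibility) contribute less to the
    video-level blurriness score."""
    w = np.array([visibility(f) for f in frames], dtype=np.float64)
    q = np.asarray(frame_scores, dtype=np.float64)
    return float(np.sum(w * q) / (np.sum(w) + 1e-12))
```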
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/8209877
Date | 12 August 2019
Creators | Chen Bai (6760616) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/IMAGE_AND_VIDEO_QUALITY_ASSESSMENT_WITH_APPLICATIONS_IN_FIRST-PERSON_VIDEOS/8209877 |