In this thesis, we propose a new method for content-aware resizing of videos in real time.
Our approach consists of two steps. First, we compute, in linear time, a set of non-salient
pixels which, when removed or duplicated, do not alter the overall appearance of the video.
This is an extension of Avidan and Shamir's [3] greedy seam-carving approach to video.
Second, we generate a new representation of the video, a so-called multi-view video, which
allows us to resize the video in real time, i.e., while it is being watched. This representation
can be computed very efficiently: its complexity is linear in the number of frames and in the
number of pixels of the video.
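The seam-carving step builds on Avidan and Shamir's image-domain algorithm; the sketch below illustrates only that baseline on a single grayscale frame (not the thesis's greedy, linear-time video extension or the multi-view representation), assuming the frame is stored as a NumPy array. All function names are illustrative.

```python
import numpy as np

def energy_map(frame):
    """Gradient-magnitude energy: large values mark salient pixels."""
    gray = np.asarray(frame, dtype=float)
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return dx + dy

def find_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam."""
    h, w = energy.shape
    cost = energy.astype(float)
    for y in range(1, h):
        up = cost[y - 1]
        left = np.roll(up, 1)
        left[0] = np.inf
        right = np.roll(up, -1)
        right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest bottom-row pixel.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_seam(gray, seam):
    """Delete one pixel per row; the frame shrinks by one column."""
    h, w = gray.shape
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)
```

Repeatedly finding and removing the minimum-energy seam narrows a frame one column at a time while preferentially discarding low-gradient, non-salient pixels; duplicating seams instead of removing them enlarges the frame.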
Our technique works on a broad variety of videos and is computationally inexpensive
enough to run on a wide range of devices. We compare it to our own implementation of a
current state-of-the-art approach and show several convincing results.
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/26622 |
Date | 19 November 2008 |
Creators | Grundmann, Matthias |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Thesis |