About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Evaluation of online hardware video stabilization on a moving platform / Utvärdering av hårdvarustabilisering av video i realtid på rörlig plattform

Gratorp, Eric January 2013
Recording a video sequence with a camera during movement often produces blurred results. This is mainly due to motion blur, which is caused by rapid movement of objects in the scene or of the camera itself during recording. By correcting for changes in the orientation of the camera, caused by e.g. uneven terrain, it is possible to minimize the motion blur and thus produce a stabilized video. To do this, data gathered from a gyroscope and from the camera itself can be used to measure the orientation of the camera. The raw data needs to be processed, synchronized and filtered to produce a robust estimate of the orientation. This estimate can then be used as input to an automatic control system in order to correct for changes in the orientation. This thesis focuses on examining the possibility of such a stabilization. The actual stabilization is left for future work. An evaluation of the hardware as well as the implemented methods is done with emphasis on speed, which is crucial in real-time computing.
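
As a rough illustration of the orientation-estimation step described above, the sketch below fuses gyroscope rates with camera-derived angles using a single-axis complementary filter. This is a generic example, not the filtering method used in the thesis; the function name, the blending constant alpha and the synthetic data are assumptions for the illustration.

```python
import numpy as np

def fuse_orientation(gyro_rate, cam_angle, dt, alpha=0.98):
    """Single-axis complementary filter: integrate the gyroscope rate (rad/s)
    for short-term accuracy and blend in camera-derived angles (rad) to
    limit long-term drift. alpha close to 1 trusts the gyroscope more."""
    est = np.zeros(len(gyro_rate))
    est[0] = cam_angle[0]
    for k in range(1, len(gyro_rate)):
        predicted = est[k - 1] + gyro_rate[k] * dt          # gyro integration
        est[k] = alpha * predicted + (1 - alpha) * cam_angle[k]
    return est

# Synthetic example: a slow tilt sampled at 100 Hz with noisy measurements
t = np.arange(1000) / 100.0
true_angle = 0.1 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, t) + np.random.normal(0, 0.02, t.size)
cam = true_angle + np.random.normal(0, 0.05, t.size)
smooth = fuse_orientation(gyro, cam, dt=0.01)
```

The gyroscope term tracks fast orientation changes while the camera term bounds long-term drift, which is the division of labor between the two sensors that the abstract describes.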
2

Improving the Utility of Egocentric Videos

Biao Ma 15 August 2019
For either entertainment or documenting purposes, people are starting to record their lives using egocentric cameras mounted on either a person or a vehicle. Our target is to improve the utility of these egocentric videos.

For egocentric videos with an entertainment purpose, we aim to enhance the viewing experience to improve overall enjoyment. We focus on First-Person Videos (FPVs), which are recorded by wearable cameras. People record FPVs in order to share their First-Person Experience (FPE). However, raw FPVs are usually too shaky to watch, which ruins the experience. We explore the mechanism of human perception and propose a biometric-based measurement called the Viewing Experience (VE) score, which measures both the stability and the First-person Motion Information (FPMI) of an FPV. This enables us to further develop a system that stabilizes FPVs while preserving their FPMI. Experimental results show that our system is robust and efficient in measuring and improving the VE of FPVs.

For egocentric videos whose goal is documentation, we aim to build a system that can centrally collect, compress and manage the videos. We focus on Dash Camera Videos (DCVs), which are used by people to document the route they drive each day. We propose a system that classifies videos according to the route driven, using GPS information and visual information. When new DCVs are recorded, their bit-rate can be reduced by jointly compressing them with videos recorded on a similar route. Experimental results show that our system outperforms other similar solutions and the standard HEVC, particularly under varying illumination.

The First-Person Video viewing experience topic and the dash camera video compression topic are representative of two classes of applications that rely on visual odometry (VO): visual augmentation and robotic perception. Different applications have different requirements for VO, and the performance of VO methods is influenced by many different factors. To help our system and other users working on similar applications, we further propose a system that investigates the performance of different VO methods under various factors. The proposed system is shown to be able to provide suggestions on selecting a VO method based on the application.
3

Computational video: post-processing methods for stabilization, retargeting and segmentation

Grundmann, Matthias 05 April 2013
In this thesis, we address a variety of challenges for the analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professional and casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments, mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework that minimizes the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer. We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach. We effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity.
We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
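
The L1-optimal camera path idea described above can be illustrated with a small convex-optimization sketch. The following 1-D toy version uses cvxpy, assumed derivative weights and an assumed crop margin; it is an illustration of the formulation, not the production YouTube implementation.

```python
import numpy as np
import cvxpy as cp

def l1_smooth_path(c, crop_margin=20.0, weights=(10.0, 1.0, 100.0)):
    """Fit a smooth camera path p to the original path c by minimizing the
    L1 norms of its first three differences, while keeping p within
    crop_margin of c so a fixed crop window stays inside the frame."""
    p = cp.Variable(len(c))
    cost = (weights[0] * cp.norm1(cp.diff(p, 1))
            + weights[1] * cp.norm1(cp.diff(p, 2))
            + weights[2] * cp.norm1(cp.diff(p, 3)))
    constraints = [cp.abs(p - c) <= crop_margin]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return p.value

# Example: a jittery pan along one image axis
c = np.cumsum(np.random.normal(1.0, 5.0, 300))
p = l1_smooth_path(c)
```

Because the L1 penalty drives most difference terms to exactly zero, the optimum is piecewise constant, linear and parabolic, which is the static/pan/ease-in behaviour the abstract attributes to professional camera work.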
4

Video Stabilization: Digital And Mechanical Approaches

Bayrak, Serhat 01 December 2008
General video stabilization techniques, which are digital, mechanical and optical, are discussed. Under the concept of video stabilization, various digital motion estimation and motion correction algorithms are implemented. For motion estimation, a mechanical approach is implemented in addition to the digital approach. All implemented motion estimation and motion correction algorithms are then compared with respect to their computational times and accuracies over various videos. For small amounts of jitter, digital motion estimation performs well in real time. For large amounts of motion, however, digital motion estimation takes a very long time, so mechanical motion estimation is preferred for its speed, even though digital motion estimation is more accurate. Thus, when mechanical motion estimation is used first and its result is used as the initial estimate for digital motion estimation, the same accuracy as digital estimation is obtained in approximately the same time as mechanical estimation. For motion correction, Kalman and fuzzy filtering perform better than low-pass and moving-average filtering.
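
As a rough illustration of the motion-correction filtering compared above, here is a minimal 1-D constant-velocity Kalman filter applied to a sequence of estimated inter-frame translations. The noise parameters and synthetic data are assumptions for the example, not values from the thesis.

```python
import numpy as np

def kalman_smooth_translation(z, dt=1.0, q=1e-3, r=0.25):
    """Smooth per-frame camera translation estimates z with a constant-velocity
    Kalman filter. q is the process noise, r the measurement noise; a larger
    r trusts the measurements less and therefore smooths more."""
    F = np.array([[1.0, dt], [0.0, 1.0]])               # state transition
    H = np.array([[1.0, 0.0]])                          # only position is observed
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    x = np.array([z[0], 0.0])                           # [position, velocity]
    P = np.eye(2)
    out = np.empty(len(z))
    for k, meas in enumerate(z):
        x = F @ x                                       # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                             # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([meas]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out

# Example: noisy cumulative horizontal translation of a shaky camera
z = np.cumsum(np.random.normal(0.5, 2.0, 200))
smooth = kalman_smooth_translation(z)
```

The correction applied to frame k is then smooth[k] - z[k]; a moving-average or low-pass filter could be dropped in at the same point for the kind of comparison the abstract reports.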
5

Video stabilization and rectification for handheld cameras

Jia, Chao 26 June 2014
Video data has increased dramatically in recent years due to the prevalence of handheld cameras. Such videos, however, are usually shakier than videos shot by tripod-mounted cameras or cameras with mechanical stabilizers. In addition, most handheld cameras use CMOS sensors. In a CMOS sensor camera, different rows in a frame are read/reset sequentially from top to bottom. When there is fast relative motion between the scene and the video camera, a frame can be distorted because each row was captured under a different 3D-to-2D projection. This kind of distortion is known as the rolling shutter effect. Digital video stabilization and rolling shutter rectification seek to remove the unwanted frame-to-frame jitter and rolling shutter effect in order to generate visually stable and pleasant videos. In general, we need to (1) estimate the camera motion, (2) regenerate the camera motion, and (3) synthesize new frames. This dissertation aims at improving the first two steps of video stabilization and rolling shutter rectification. It has been shown that the inertial sensors in handheld devices can provide more accurate and robust motion estimation than vision-based methods. This dissertation proposes an online camera-gyroscope calibration method for sensor fusion while a user is capturing video. The proposed method uses an implicit extended Kalman filter and is based on multiple-view geometry in a rolling shutter camera model. It is able to estimate the needed calibration parameters online under all kinds of camera motion. Given the camera motion estimated from inertial sensors after the proposed calibration method, this dissertation first proposes an offline motion smoothing algorithm based on a 3D rotational camera motion model. The offline motion smoothing is formulated as a geodesic-convex regression problem on the manifold of rotation matrix sequences. The formulated problem is solved by an efficient two-metric projection algorithm on the manifold. The geodesic-distance-based smoothness metric better exploits the manifold structure of sequences of rotation matrices. This dissertation then proposes two online motion smoothing algorithms that are also based on a 3D rotational camera motion model. The first algorithm extends IIR filtering from Euclidean space to the nonlinear manifold of 3D rotation matrices. The second algorithm uses unscented Kalman filtering on a constant angular velocity model. Both offline and online motion smoothing algorithms are constrained to guarantee that no black borders intrude into the stabilized frames.
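
The manifold IIR smoothing mentioned above can be sketched compactly with scipy's rotation utilities. This is a generic illustration with an assumed blending constant, not the dissertation's formulation; in particular it omits the constraint that keeps black borders out of the stabilized frames.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def iir_smooth_rotations(raw, alpha=0.9):
    """First-order IIR low-pass on the rotation manifold: at each step, move a
    fraction (1 - alpha) of the way from the previous smoothed rotation toward
    the new raw rotation along the geodesic (via the relative rotation vector),
    instead of averaging matrix entries in Euclidean space."""
    smoothed = [raw[0]]
    for r in raw[1:]:
        prev = smoothed[-1]
        delta = (prev.inv() * r).as_rotvec()            # relative rotation
        step = R.from_rotvec((1.0 - alpha) * delta)     # partial geodesic step
        smoothed.append(prev * step)
    return smoothed

# Example: a slow pan about the y-axis corrupted by small random jitter
true_path = [R.from_euler("y", 0.5 * k, degrees=True) for k in range(100)]
jitter = [R.from_rotvec(np.random.normal(0, 0.01, 3)) for _ in range(100)]
raw = [t * j for t, j in zip(true_path, jitter)]
smooth = iir_smooth_rotations(raw)
```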
6

Onboard Video Stabilization for Unmanned Air Vehicles

Cross, Nicholas Stewart 01 June 2011
Unmanned Air Vehicles (UAVs) enable the observation of hazardous areas without endangering a pilot. Observational capabilities are provided by on-board video cameras, and images are relayed to remote operators for analysis. However, vibration and wind cause video camera mounts to move and can introduce unintended motion that makes video analysis more difficult. Video stabilization is a process that attempts to remove unwanted movement from a video input to provide a clearer picture. This thesis presents an onboard video stabilization solution that removes high-frequency jitter, displays output at 20 frames per second (FPS), and runs on a Blackfin embedded processor. Any video stabilization algorithm will have to contend with the limited space, weight, and power available for embedded systems hardware on a UAV. This thesis demonstrates how architecture-specific optimizations improve algorithm performance on embedded systems and allow an algorithm that was designed with more powerful computing systems in mind to perform on a system that is limited in both size and resources. These optimizations reduce the total clock cycles per frame by 157 million, down to 30 million, which yields a frame rate increase from 3.2 to 20 FPS.
7

A Surveillance System to Create and Distribute Geo-Referenced Mosaics Using SUAV Video

Andersen, Evan D. 14 June 2008
Small Unmanned Aerial Vehicles (SUAVs) are an attractive choice for many surveillance tasks. However, video from an SUAV can be difficult to use in its raw form. In addition, the limitations inherent in the SUAV platform inhibit the distribution of video to remote users. To solve the problems with using SUAV video, we propose a system to automatically create geo-referenced mosaics of video frames. We also present three novel techniques we have developed to improve the ortho-rectification and geo-location accuracy of the mosaics. The most successful of these techniques is able to reduce geo-location error by a factor of 15 with minimal computational overhead. The proposed system overcomes communications limitations by transmitting the mosaics to a central server where they can easily be accessed by remote users via the Internet. Using flight test results, we show that the proposed mosaicking system achieves real-time performance and produces high-quality and accurately geo-referenced imagery.
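
As a generic illustration of the frame-registration step that underlies this kind of mosaicking (not the technique used in the thesis; the ORB features and the function shown here are assumptions for the example), the sketch below estimates the homography mapping a new frame into the mosaic. The geo-referencing and ortho-rectification steps are not shown.

```python
import cv2
import numpy as np

def register_frame_to_mosaic(mosaic_gray, frame_gray):
    """Estimate the homography that maps a new grayscale video frame into the
    mosaic's coordinate system from ORB feature matches."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(mosaic_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H   # warp the frame with cv2.warpPerspective(frame, H, mosaic_size)
```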
8

FPGA Implementation Of Real Time Digital Video Stabilization

Ozsarac, Ismail 01 February 2011
Video stabilization methods are classified as mechanical or digital. Mechanical methods are based on motion sensors. Digital methods are implemented in software and are classified into two groups, time domain and frequency domain, according to the signal processing methods used for the motion analysis. Although mechanical methods have good real-time stabilization performance, they are not suitable for small platforms such as mobile robots. On the other hand, digital video stabilization methods are easy to implement on various hardware; however, they require a high computational load and long processing time. Two different digital video stabilization methods, one frequency-domain and one time-domain algorithm, are implemented on an FPGA to assess their real-time performance. The methods are also implemented and tested in MATLAB, and the FPGA results are compared with MATLAB's to evaluate accuracy. The input video format is PAL, whose frame period is 40 ms. The FPGA implementation is capable of producing new stabilization data at every PAL frame, which allows the implementation to be classified as real time. The simulation and hardware tests also show that the FPGA implementation can reach the MATLAB accuracy.
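
As an illustration of frequency-domain motion estimation of the kind compared above, here is a minimal phase-correlation sketch that estimates a global translation between two frames. It is a generic software example, not the thesis's FPGA design.

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the global (row, col) shift of curr relative to prev via phase
    correlation: the normalized cross-power spectrum of the two FFTs has an
    inverse transform that peaks at the displacement."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12          # keep only the phase
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# A frame rolled by (5, -3) pixels should give back approximately (5, -3)
frame = np.random.rand(128, 128)
moved = np.roll(np.roll(frame, 5, axis=0), -3, axis=1)
print(phase_correlation_shift(frame, moved))
```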
9

Detekce anomálií v chování davu ve video-datech z dronu / Crowd Behavior Anomaly Detection in Drone Videodata

Bažout, David January 2021
There have been many new drone applications in recent years, and drones are often used by national security forces. The aim of this work is to design and implement a tool for crowd behavior analysis in drone video data. The tool identifies suspicious behavior of persons and facilitates its localization. The main contributions include the design of a suitable video stabilization algorithm that stabilizes small jitters and recovers a lost scene. Furthermore, two anomaly detectors are proposed, differing in the method of feature vector extraction and background modeling. Compared to state-of-the-art approaches, they achieve comparable results while also offering the possibility of online data processing.
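
As a loose illustration of background modeling over feature vectors (a generic sketch, not either of the detectors proposed in the thesis; the grid size, learning rate and threshold are assumptions), the snippet below keeps running per-cell statistics of a feature map such as optical-flow magnitude and flags cells that deviate strongly from them.

```python
import numpy as np

def update_background(model, features, rho=0.05):
    """Running per-cell mean/variance of a feature map (e.g., optical-flow
    magnitude on a coarse grid), updated online with learning rate rho."""
    mean, var = model
    mean = (1 - rho) * mean + rho * features
    var = (1 - rho) * var + rho * (features - mean) ** 2
    return mean, var

def anomaly_mask(model, features, k=3.0):
    """Flag grid cells whose current feature value deviates from the
    background model by more than k standard deviations."""
    mean, var = model
    return np.abs(features - mean) > k * np.sqrt(var + 1e-6)

# Example on a 16x16 grid: train on "normal" frames, then test a deviant one
model = (np.zeros((16, 16)), np.ones((16, 16)))
for _ in range(200):
    model = update_background(model, np.random.rand(16, 16))
suspicious = anomaly_mask(model, 5.0 + np.random.rand(16, 16))
```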
10

Video Stabilization and Target Localization Using Feature Tracking with Video from Small UAVs

Johansen, David Linn 27 July 2006
Unmanned Aerial Vehicles (UAVs) equipped with lightweight, inexpensive cameras have grown in popularity by enabling new uses of UAV technology. However, the video retrieved from small UAVs is often unwatchable due to high-frequency jitter. Beginning with an investigation of previous stabilization work, this thesis discusses the challenges of stabilizing UAV-based video. It then presents a software-based computer vision framework and discusses its use to develop a real-time stabilization solution. A novel approach to estimating intended video motion is then presented. Next, the thesis extends previous target localization work by allowing the operator to easily identify targets, rather than relying solely on color segmentation, to improve reliability and applicability in real-world scenarios. The resulting approach creates a low-cost and easy-to-use solution for aerial video display and target localization.
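
Below is a minimal sketch of the feature-tracking front end that this kind of stabilization and localization typically relies on, using OpenCV corner tracking and a similarity-transform fit. It is a generic example, not the thesis's framework; the parameter values are assumptions.

```python
import cv2
import numpy as np

def frame_to_frame_motion(prev_gray, curr_gray):
    """Estimate a 2x3 similarity transform between consecutive grayscale frames
    by tracking corner features with pyramidal Lucas-Kanade optical flow."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    if ok.sum() < 4:
        return None
    M, _ = cv2.estimateAffinePartial2D(pts[ok], nxt[ok])
    return M

# Smoothing the translation/rotation extracted from M over time, then warping
# each frame by the residual motion, yields the stabilized output.
```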
