Real-Time Recognition of Planar Targets on Mobile Devices: A Framework for Fast and Robust Homography Estimation. Bazargani, Hamid. January 2014 (has links)
The present thesis is concerned with robust pose estimation for planar targets in the context of real-time mobile vision. Developments that earlier researchers made in isolation are here considered together, and several adaptations to the existing algorithms yield a unified framework for robust pose estimation, designed to meet the growing demand for fast and robust estimation on power-constrained platforms. For robust recognition of targets at very low computational cost, we employ feature-based methods built on local binary descriptors, which allow fast feature matching at run-time. The resulting matching set is fed to a robust parameter estimation algorithm to obtain a reliable homography. On the basis of our experimental results, we conclude that reliable homography estimates can be obtained using a device-friendly implementation of the Gaussian elimination algorithm, and we show that this simplified approach can significantly improve the homography estimation step in a hypothesize-and-verify scheme. Attention is focused not only on developing fast algorithms for the recognition framework but also on their optimized implementation, from which any other recognition framework would similarly benefit.
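The core idea of the abstract, estimating a homography from four point correspondences by solving the linear system with Gaussian elimination rather than a heavier SVD, can be sketched as follows. This is a minimal numpy illustration (explicit Gauss-Jordan elimination with partial pivoting on the 8x8 DLT system, fixing h33 = 1), not the thesis's device-optimized implementation.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Estimate H (with h33 = 1) from four point correspondences."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0, -u * x, -u * y]
        b[2 * i]     = u
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -v * x, -v * y]
        b[2 * i + 1] = v
    # Gauss-Jordan elimination with partial pivoting on the augmented [A | b]
    M = np.hstack([A, b[:, None]])
    for c in range(8):
        p = c + np.argmax(np.abs(M[c:, c]))
        M[[c, p]] = M[[p, c]]          # row swap for numerical stability
        M[c] /= M[c, c]
        for r in range(8):
            if r != c:
                M[r] -= M[r, c] * M[c]
    return np.append(M[:, -1], 1.0).reshape(3, 3)
```

In a hypothesize-and-verify scheme such as RANSAC, this solver runs once per minimal 4-point sample, which is why a cheap elimination-based solve matters on a power-constrained device.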
Video inpainting techniques : application to object removal and error concealment / Techniques d’inpainting vidéo : application à la suppression des objets et à la dissimulation des erreurs. Ebdelli, Mounira. 20 June 2014 (has links)
This thesis presents video inpainting tools to efficiently recover space-time holes in different kinds of video sequences. Two categories of video inpainting approaches are particularly studied. The first category concerns exemplar-based approaches, for which several contributions have been proposed.
Neighbor embedding techniques have been proposed for patch sampling, using two dimensionality reduction methods: non-negative matrix factorization (NMF) and locally linear embedding (LLE). An analysis of similarity metrics for patch matching has then been carried out, based on both subjective and objective tests. The proposed framework has also been adapted to the error concealment application by adding a preprocessing step that estimates the lost motion vectors. A multiresolution approach has been considered to reduce the computational time of the method. Experimental evaluations demonstrate the effectiveness of the proposed video inpainting approach in both object removal and error concealment applications. The video inpainting problem has also been addressed with a second approach based on the optimization of a well-defined cost function expressing the global spatio-temporal consistency of the recovered regions. Finally, videos captured by moving cameras have been tackled using region-based homographies: neighboring frames in the sequence are aligned based on segmented planar regions, which has been shown to outperform classical alignment with a single optimized homography.
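As a rough illustration of the LLE component, the weights that approximate a target patch as a combination of its nearest candidate patches can be obtained from the local Gram matrix under a sum-to-one constraint. This is the generic LLE weight computation, not the thesis's exact formulation; the regularization constant is an assumption for numerical stability.

```python
import numpy as np

def lle_weights(patch, neighbors, reg=1e-3):
    """LLE weights approximating `patch` (d,) from `neighbors` (k, d)."""
    Z = neighbors - patch                     # shift neighbors to the target
    G = Z @ Z.T                               # local Gram matrix (k x k)
    G = G + reg * np.trace(G) * np.eye(len(G))  # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                        # enforce sum-to-one constraint
```

In the inpainting setting the weights are computed on the known pixels of the patch and then applied to the neighbors' full pixel vectors to fill the hole.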
An Intelligent Portable Aerial Surveillance System: Modeling and Image Stitching. Du, Ruixiang. 29 May 2013 (links)
"Unmanned Aerial Vehicles (UAVs) have been widely used in modern warfare for surveillance, reconnaissance, and even attack missions. They can provide valuable battlefield information and accomplish dangerous tasks with minimal risk of loss of life or personal injury. However, existing UAV systems fall short in many situations, one of the most notable being support for individual troops. Besides being unable to always provide images at the desired resolution, currently available systems are either too expensive for large-scale deployment or too heavy and complex for a single soldier. The Intelligent Portable Aerial Surveillance System (IPASS), sponsored by the Air Force Research Laboratory (AFRL), aims to develop a low-cost, lightweight unmanned aerial vehicle that can provide sufficient battlefield intelligence for individual troops. The main contributions of this thesis are two-fold: (1) the development and verification of a model-based flight simulation for the aircraft, and (2) a comparison of image stitching techniques to provide comprehensive aerial surveillance information from multiple views. To assist with the design and control of the aircraft, dynamical models are established at different levels of complexity, and simulations with these models are implemented in Matlab to study the aircraft's dynamical characteristics. Once the flying platform was built, aerial images acquired from its three onboard cameras were processed. The thesis first introduces how an image is formed by a camera and the general pipeline of feature-based image stitching. To better satisfy the needs of this application, a homography-based stitching method is studied; it greatly reduces computation time with very little compromise in panorama quality, making real-time video display of the surroundings on the ground station possible.
By implementing both stitching methods in OpenCV, a quantitative comparison of their performance is accomplished."
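The homography-based stitching step described above can be sketched as a backward warp: each pixel of the panorama canvas is mapped through the inverse homography into the source image. The sketch below is plain numpy with nearest-neighbor sampling; an OpenCV implementation would instead call `cv2.warpPerspective` with the same H.

```python
import numpy as np

def warp_into_panorama(img, H, out_shape):
    """Backward-warp a single-channel img onto an out_shape canvas via H."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ pts                          # canvas pixels -> source pixels
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

Stitching the three onboard views then amounts to warping each camera image with its homography onto a shared canvas and blending the overlaps.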
A Visual Return-to-Home System for GPS-Denied Flight. Lewis, Benjamin Paul. 1 August 2016 (links)
Unmanned aerial vehicle technology is rapidly maturing, and in recent years the sight of hobbyist aircraft has become more common. Corporations and governments are also interested in using drone aircraft for applications such as package delivery, surveillance, and communications. These autonomous UAV applications demand robust systems that perform under any circumstances. Many UAV applications rely on GPS for information about their location and velocity. However, the GPS system has known vulnerabilities, including environmental signal degradation, terrestrial or solar weather, and malicious attacks such as GPS spoofing. These conditions occur with enough frequency to cause concern. Without a GPS signal, the state estimate in many autopilots quickly degrades; in the absence of a reliable backup navigation scheme, this loss of state causes the aircraft to drift off course, and in many cases to lose power or crash. While no single approach can solve all of the issues with GPS signal degradation, individual failure modes can be addressed and solved. In this thesis, we present a system that returns an aircraft to its launch point upon the loss of GPS. This functionality is advantageous because it allows recovery of the UAV in circumstances that the lack of GPS information would otherwise make difficult. The system accomplishes the return by means of onboard visual navigation, which removes the aircraft's dependence on external sensors and systems. It uses a downward-facing onboard camera and computer to capture a chain of overlapping images (keyframes) of the ground as the aircraft travels on its outbound journey. When the return-home signal is received, the aircraft switches into return-to-home mode. The system uses the homography matrix and other vision processing techniques to estimate the location of the current keyframe relative to the aircraft.
This information is used to navigate the aircraft to the location of each saved keyframe in reverse order. As each keyframe is reached, the system programmatically loads the next target keyframe; by following the chain of keyframes in reverse, it returns to the launch location. Contributions of this thesis include the visual return-to-home flight system for UAVs, which has been tested both in simulation and in flight tests. Features of the system include methods for determining new keyframes on the outbound flight and switching keyframes on the inbound flight, extracting navigation data between images, and flying the aircraft based on this information. This system is one piece of the wider GPS-denied framework under development in the BYU MAGICC lab.
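One way to turn the frame-to-keyframe homography into a steering command, sketched here under the assumption that H maps keyframe pixels into the current camera frame, is to project the keyframe centre and steer toward its offset from the image centre. Both the interface and the 5% arrival threshold are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def keyframe_guidance(H, frame_size):
    """Offset (pixels) from the image centre to the keyframe centre, plus
    a flag saying whether the keyframe can be considered reached."""
    w, h = frame_size
    c = H @ np.array([w / 2, h / 2, 1.0])   # keyframe centre in current frame
    c = c[:2] / c[2]                        # back to inhomogeneous pixels
    offset = c - np.array([w / 2, h / 2])
    reached = np.linalg.norm(offset) < 0.05 * min(w, h)  # hypothetical threshold
    return offset, reached
```

When `reached` fires, the navigator would pop the next keyframe in the reversed chain and recompute H against it.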
Camera-projector presentation system. Zhuang, Ming-yin. 8 June 2005 (links)
As digital Web-cams have grown in popularity, these devices have become ever cheaper and more powerful. We can apply computer vision techniques with a camera and a projector to build a more convenient presentation system. In a presentation, the position of the projector sometimes causes the projected images to exhibit perspective distortion (keystone distortion). The user must then manually adjust the projector's position or use the projector's built-in keystone correction; but when the distortion is not a trapezium, the built-in correction is not suitable. We present a computer-vision-based method that uses a Web-cam to calibrate the keystone distortion. The Web-cam captures the images that the projector projects on the wall; if it observes keystone distortion in them, we apply a geometric transform that pre-warps the images in the projector frame, so that after projection on the wall they appear rectangular with a known aspect ratio. In addition, we implement virtual buttons that allow users to interact with the computer: when the camera detects the laser point inside a virtual button, the computer triggers the corresponding event as if the button had been pushed. This work uses point-matching pairs to obtain the homography between the camera image frame and the source image frame; this homography is the foundation of the keystone calibration and also helps us locate the laser point.
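The virtual-button test described above reduces to mapping the detected laser point through the camera-to-source homography and checking containment in the button rectangle. The names and the rectangle convention in this sketch are illustrative, not taken from the thesis.

```python
import numpy as np

def laser_hits_button(H_cam_to_src, laser_px, button_rect):
    """Map a camera-frame laser point into the source slide; test one button."""
    p = H_cam_to_src @ np.array([laser_px[0], laser_px[1], 1.0])
    x, y = p[:2] / p[2]              # back to inhomogeneous slide coordinates
    x0, y0, x1, y1 = button_rect     # (left, top, right, bottom) in slide pixels
    return bool(x0 <= x <= x1 and y0 <= y <= y1)
```

The same homography drives the pre-warp: warping the slide by its inverse makes the wall projection rectangular again.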
Joint Visual and Wireless Tracking System. Nott, Viswajith Karapoondi. 1 January 2009 (links)
Object tracking is an important component of many applications, including surveillance, manufacturing, and inventory tracking. The most common approach combines a surveillance camera with an appearance-based visual tracking algorithm. While this approach can provide high tracking accuracy, the tracker easily diverges in environments with heavy occlusion. In recent years, wireless tracking systems based on different frequency ranges have become more popular. While systems using ultra-wideband frequencies suffer problems similar to those of visual systems, systems that use frequencies as low as those in the AM band circumvent the problem of obstacles and exploit the near-field relationship between the electric and magnetic waves to achieve tracking accuracy down to about one meter. In this dissertation, I study the combination of a visual tracker and a low-frequency wireless tracker to improve visual tracking in highly occluded areas. The proposed system utilizes two homographies relating the world coordinates to the image coordinates of the head and the foot of the target person. Working in the world coordinate system, it combines the visual tracker and the wireless tracker in an Extended Kalman Filter framework for joint tracking. Extensive experiments using both simulations and real videos demonstrate the validity of the proposed scheme.
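The dual-homography idea can be sketched as follows: one homography maps image pixels of the foot (on the ground plane) to world coordinates, and a second maps head pixels (on a parallel plane at head height) to the same world frame, so the two projections can be fused. Averaging the two fixes is a simplification; the thesis fuses them, together with the wireless measurement, inside an Extended Kalman Filter.

```python
import numpy as np

def world_position(H_foot, H_head, foot_px, head_px):
    """Fuse ground-plane projections of the foot and head image points."""
    def proj(H, p):
        q = H @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]          # inhomogeneous world (x, y)
    # Simple average of the two plane-induced projections
    return 0.5 * (proj(H_foot, foot_px) + proj(H_head, head_px))
```

Using two planes makes the estimate robust when one body part is occluded: the EKF can weight whichever projection is currently observed.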
Design and Implementation of Video View Synthesis for the Cloud. Pouladzadeh, Parvaneh. January 2017 (links)
In multi-view video applications, view synthesis is a computationally intensive task that must be done correctly and efficiently to deliver a seamless user experience. To provide fast and efficient view synthesis, this thesis presents a cloud-based implementation that is especially beneficial to mobile users whose devices may not be powerful enough for high-quality view synthesis. The proposed implementation balances the view synthesis algorithm's components across multiple threads and utilizes the computational capacity of modern CPUs for faster and higher-quality view synthesis. For arbitrary view generation, we utilize the depth maps of the scene from the cameras' viewpoints and estimate the depth information as perceived from the virtual camera. The estimated depth is then used in a backward direction to warp the cameras' images onto the virtual view. Finally, we use a depth-aided inpainting strategy in the rendering step to reduce the effect of disocclusion regions (holes) and to paint the missing pixels. For the cloud implementation, we employed an automatic scaling feature that offers elasticity, adapting the service to fluctuating user demand. Our performance results using 4 multi-view videos over 2 different scenarios show that, for the parallelizable parts of the algorithm, the proposed system achieves an average 3x speedup, 87% efficiency, and 90% CPU utilization.
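The backward warping step can be illustrated in one dimension for a horizontally shifted virtual camera: the disparity implied by the estimated depth (disparity = f·B / Z for focal length f and baseline B) selects which reference pixel to sample for each virtual pixel. This is a textbook depth-image-based-rendering sketch under assumed pinhole geometry, not the thesis's implementation.

```python
import numpy as np

def backward_warp_row(src_row, depth_row, baseline_focal):
    """Synthesize one row of the virtual view by backward depth warping."""
    w = src_row.shape[0]
    out = np.zeros_like(src_row)
    for x in range(w):
        d = baseline_focal / depth_row[x]   # disparity = f * B / Z
        sx = int(round(x - d))              # sample the reference row
        if 0 <= sx < w:
            out[x] = src_row[sx]            # pixels left unset are holes
    return out
```

Pixels that map outside the reference image remain holes; this is where the depth-aided inpainting strategy takes over.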
A Foveated System for Wilderness Search and Rescue in Manned Aircraft. Fenimore, Carson D. 23 November 2011 (links) (PDF)
Wilderness search and rescue can be assisted by video searchers in manned aircraft. The video searcher's primary task is to find clues on the ground. Due to altitude, it may be difficult to resolve details on the ground with a standard video camera. As the video streams at a constant frame rate, the searcher may become distracted by other tasks. While handling these tasks the searcher may miss important clues or spend extra time flying over the search area; either outcome decreases both the effectiveness of the video searcher and the chances of successfully finding missing persons. We develop an efficient software system that allows the video searcher to deal with distractions while identifying, resolving, and geolocating clues using mixed-resolution video. We construct an inexpensive camera rig that feeds video and telemetry to this system. We also develop a simple flight simulator for generating synthetic search video for simulation and testing purposes. To validate our methods we conduct a user study and a field trial. An analysis of the user study results suggests that our system can combine the video streams without loss of performance in the primary or secondary search task. The resulting gains in screen-space efficiency can then be used to present more information, such as scene context or larger-resolution images. Additionally, the field trial suggests that the software is capable of robustly operating in a real-world environment.
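The mixed-resolution display can be sketched as compositing a high-resolution fovea patch onto a lower-resolution periphery frame; the placement interface here is an assumption, not the thesis's design.

```python
import numpy as np

def composite_foveated(periphery, fovea, top_left):
    """Overlay the high-resolution fovea patch onto the periphery frame."""
    out = periphery.copy()
    y, x = top_left
    h, w = fovea.shape[:2]
    out[y:y + h, x:x + w] = fovea   # replace the region of interest
    return out
```

Keeping only the fovea at full resolution is what frees up the screen space the user study measures.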
IR Illumination-Assisted Smart Headlight Glare Reduction. Sanders, Larry Dean, Jr. 20 December 2017 (links)
No description available.
Shape Recovery by Exploiting Planar Topology in 3D Projective Space. Lai, Po-Lun. 24 August 2010 (links)
No description available.