1

ROBUST BACKGROUND SUBTRACTION FOR MOVING CAMERAS AND THEIR APPLICATIONS IN EGO-VISION SYSTEMS

Sajid, Hasan 01 January 2016 (has links)
Background subtraction is the algorithmic process that segments the region of interest, often known as the foreground, from the background. Extensive literature and numerous algorithms exist in this domain, but most research has focused on videos captured by static cameras. The proliferation of portable platforms equipped with cameras has resulted in a large amount of video data being generated from moving cameras, which motivates the need for foundational foreground/background segmentation algorithms for videos from moving cameras. In this dissertation, I propose three new types of background subtraction algorithms for moving cameras, based on appearance, on motion, and on a combination of the two. Comprehensive evaluation of the proposed approaches on publicly available test sequences shows the superiority of our system over state-of-the-art algorithms. The first method is an appearance-based global modeling of foreground and background. Features are extracted by sliding a fixed-size window over the entire image without any spatial constraint, to accommodate arbitrary camera movements. A supervised learning method is then used to build the foreground and background models. This method is suitable for limited-scene scenarios such as Pan-Tilt-Zoom surveillance cameras. The second method relies on motion. It comprises an innovative background-motion approximation mechanism followed by spatial regulation through a Mega-Pixel denoising process. This method does not need to maintain any costly appearance models and is therefore appropriate for resource-constrained ego-vision systems. The proposed segmentation, combined with skin cues, is validated by a novel application that authenticates hand-gestured signatures captured by wearable cameras. The third method combines both motion and appearance: foreground probabilities are jointly estimated from motion and appearance, and after the Mega-Pixel denoising process the probability estimates and the gradient image are combined by Graph-Cut to produce the segmentation mask. This method is universal in that it can handle all types of moving cameras.
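The general idea behind the combined motion-and-appearance approach can be illustrated with a short sketch. The code below is not the dissertation's implementation; it is a minimal illustration using OpenCV, under the assumption that background motion can be approximated by a single homography fitted to dense optical flow, with residual motion used to seed a graph-cut refinement (grabCut). The function name, grid spacing, and thresholds are illustrative assumptions.

```python
# Minimal sketch (not the dissertation's code): motion residuals against a global
# background-motion model, refined by a graph cut. Thresholds and the sampling
# grid spacing below are illustrative assumptions.
import cv2
import numpy as np

def foreground_mask(prev_bgr, curr_bgr, step=10, residual_thresh=2.0):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    h, w = prev_gray.shape

    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Fit a homography to sparse flow samples as a proxy for camera-induced motion.
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[ys.ravel(), xs.ravel()]
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return np.zeros((h, w), np.uint8)

    # Pixels whose motion deviates from the background model are likely foreground.
    warped = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
    residual = np.linalg.norm(dst - warped, axis=1).astype(np.float32).reshape(ys.shape)
    residual = cv2.resize(residual, (w, h), interpolation=cv2.INTER_LINEAR)
    if not np.any(residual > residual_thresh):
        return np.zeros((h, w), np.uint8)

    # Graph-cut refinement (grabCut) seeded with probable foreground/background labels.
    mask = np.full((h, w), cv2.GC_PR_BGD, np.uint8)
    mask[residual > residual_thresh] = cv2.GC_PR_FGD
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(curr_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```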
2

Hardware Implementation Of An Active Feature Tracker For Surveillance Applications

Solmaz, Berkan 01 July 2008 (has links) (PDF)
The integration of image sensors and high-performance processors into embedded systems has enabled the development of intelligent vision systems. In this thesis, we developed an active autonomous system for surveillance applications. The proposed system automatically detects a single moving object in the field of view and tracks it over a wide area by controlling the pan-tilt-zoom features of the camera. The system can also enter an alarm state to warn the user. The processing unit of the system is a Texas Instruments DM642 Evaluation Module, a low-cost, high-performance video and imaging development platform designed for developing and evaluating video-based applications.
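The pan-tilt-zoom control loop described here can be sketched in a few lines. This is not the thesis's DSP implementation; it is a hypothetical illustration of proportional centering control, where send_ptz_rates, the gain, dead band, and zoom policy are all assumed placeholders for whatever camera interface is actually used.

```python
# Illustrative sketch only (the thesis implements tracking on a TI DM642 DSP):
# keep a tracked object centered by converting its pixel offset from the image
# center into pan/tilt rate commands. `send_ptz_rates` and the gains are hypothetical.
def ptz_step(bbox, frame_size, send_ptz_rates, gain=0.05, dead_band=10, zoom_ratio=0.25):
    x, y, bw, bh = bbox                      # tracked object bounding box (pixels)
    fw, fh = frame_size
    cx, cy = x + bw / 2.0, y + bh / 2.0      # object center
    ex, ey = cx - fw / 2.0, cy - fh / 2.0    # offset from the image center

    # Dead band avoids jitter when the object is already roughly centered.
    pan_rate = gain * ex if abs(ex) > dead_band else 0.0
    tilt_rate = gain * ey if abs(ey) > dead_band else 0.0

    # Simple zoom policy: zoom in while the object occupies little of the frame.
    zoom_rate = 1.0 if (bw * bh) < zoom_ratio * fw * fh else 0.0

    send_ptz_rates(pan_rate, tilt_rate, zoom_rate)
    return pan_rate, tilt_rate, zoom_rate
```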
3

Exterior inspection of an aircraft using a Pan-Tilt-Zoom camera and a 3D scanner moved by a mobile robot: 2D image processing and 3D point cloud analysis

Jovančević, Igor 21 November 2016 (has links)
This thesis is part of an industry-oriented, multi-partner project aimed at developing a collaborative mobile robot (a cobot), autonomous in its movements on the ground and capable of performing visual inspection of an aircraft, both during short or long maintenance procedures in the hangar and in the pre-flight phase on the tarmac. The cobot is equipped with sensors for its autonomous navigation tasks as well as with a set of optical sensors constituting the inspection head: an orientable Pan-Tilt-Zoom visible-light camera and a 3D scanner, delivering data as 2D images and 3D point clouds, respectively. The goal of the thesis is to propose original approaches for processing 2D images and 3D point clouds in order to make a decision about the flight readiness of the airplane. We developed algorithms for verifying aircraft items such as vents, doors, sensors, tires or engines, as well as for detecting and characterizing 3D damage on the fuselage (dents, scratches, etc.). We exploited the a priori knowledge available on the airplane structure, notably the 3D CAD model of the aircraft (an Airbus A320 in our experiments). We argue that by investing effort in sufficiently robust algorithms, and by using existing optical sensors to acquire suitable data, a non-invasive, accurate, and time-efficient system for automatic exterior airplane inspection can be built. The work had to satisfy two, sometimes conflicting, requirements: develop inspection algorithms that are as general as possible, and also meet the specific requirements of an industry-oriented project aimed at an operational prototype. On one side, we aimed to design and assess approaches that could be applied to other large structures, such as buildings or ships. On the other hand, writing the code that controls the inspection sensors and integrating our code on the real-time robotic system with the modules developed by the project partners were necessary to demonstrate the feasibility of the prototype. The prototype was tested extensively in the maintenance hangar and on the tarmac.
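One ingredient of such a pipeline, comparing the scanned fuselage surface against a reference model, can be sketched as follows. This is not the thesis's algorithm; it is a generic deviation check, under the assumption that the scan has already been registered to a reference point cloud sampled from the CAD surface, and the 2 mm tolerance is an illustrative assumption.

```python
# Generic sketch (not the thesis's method): flag scanned points that deviate from a
# co-registered reference cloud (e.g. sampled from a CAD surface) by more than a
# tolerance, as candidate damage regions. The 2 mm tolerance is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def damage_candidates(scan_xyz, reference_xyz, tolerance=0.002):
    """scan_xyz, reference_xyz: (N, 3) arrays in meters, already co-registered."""
    tree = cKDTree(reference_xyz)
    dist, _ = tree.query(scan_xyz, k=1)   # distance to the nearest reference point
    mask = dist > tolerance               # points deviating beyond the tolerance
    return scan_xyz[mask], dist[mask]
```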
4

Algorithms for automatic composition of photographs

CAVALCANTI, Cláudio Sebastião Vasconcelos da Cunha. 17 August 2018 (has links)
Besides being one of the most popular forms of art, photography is also a form of leisure and a working tool. Now that cameras, especially digital ones, and their accessories are less expensive and more widespread, there is growing interest in algorithms and tools that help photographers, both amateur and professional, capture higher-quality images. Within this context, this dissertation proposes and develops algorithms for detecting and correcting errors in photographic composition. Photographic composition rules are heuristics used by photographers that became so widespread that they are now known as "rules". Photographers are not unanimous about the use of some of these rules; even so, applying them may allow an amateur photographer with no prior knowledge of photography to produce photographs of near-professional quality. Two approaches to automating composition correction are proposed: an on-line method, in which the final picture is only taken once a number of quality conditions are satisfied, and an off-line method, which classifies (or corrects) the image after it has been acquired. Both rely on algorithms for detecting and correcting problems in the positioning of the subject. The results were evaluated in two experiments. In the first, a subjective analysis, users agreed with up to 65% of the corrections produced by the system. The second experiment showed how, using only a Pan-Tilt-Zoom camera (a camera with three degrees of freedom: two of rotation and one controlling the field of view) and the composition rules developed in this work, it is possible to locate and photograph people in a given environment.
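As one concrete illustration of an automated composition rule, the sketch below implements the rule of thirds, used here only as an example; the dissertation's own rule set is not reproduced. It assumes the subject's center has already been detected and returns the pixel shift that would place it on the nearest thirds intersection, which could feed a crop or a PTZ correction.

```python
# Illustrative sketch of one common composition heuristic (rule of thirds); it is
# not the dissertation's rule set. Returns the pixel shift that would move the
# subject center onto the nearest rule-of-thirds intersection.
def thirds_correction(subject_center, frame_size):
    cx, cy = subject_center
    fw, fh = frame_size
    # The four rule-of-thirds intersections of the frame.
    points = [(fw * i / 3.0, fh * j / 3.0) for i in (1, 2) for j in (1, 2)]
    # Pick the closest intersection and report the shift needed to reach it.
    tx, ty = min(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return tx - cx, ty - cy   # e.g. feed into a PTZ controller or a crop offset

# Example: a subject detected at (400, 300) in a 1280x720 frame.
dx, dy = thirds_correction((400, 300), (1280, 720))
```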
