21

Identificação automática do comportamento do tráfego a partir de imagens de vídeo / Automatic identification of traffic behavior using video images

Marcomini, Leandro Arab 10 August 2018 (has links)
This research proposes an automatic computational system capable of identifying, from video images, traffic behavior on highways. All code was written in Python using the OpenCV library. The first step of the proposed system is to subtract the background from each frame. Three background subtraction methods available in OpenCV were tested, using a contingency table to extract performance metrics; MOG2 was chosen as the best method, processing frames at 64 FPS with more than 95% accuracy. The second step is to detect, track, and group features of the moving vehicles: the Shi-Tomasi detector is used together with optical flow functions for tracking, and features are grouped by a combination of pixel distance and relative velocity. In the final step, the algorithm exports both microscopic and macroscopic information to report files in a defined CSV format. A space-time diagram is also generated at runtime, from which important information for transportation system operations can be extracted. Vehicle counts and speeds were used to validate the extracted information against traditional collection methods: the mean counting error across all videos was 12.8%, and the speed error was around 9.9%.
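A minimal sketch of the pipeline this abstract describes, using the OpenCV calls it names (MOG2, Shi-Tomasi, pyramidal Lucas-Kanade optical flow). The file name and all parameter values are illustrative placeholders, not the thesis's actual configuration.

```python
import cv2

# MOG2 background subtraction, Shi-Tomasi corners restricted to the
# foreground mask, and Lucas-Kanade optical flow to track them.
cap = cv2.VideoCapture("traffic.mp4")  # placeholder file name
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mask = subtractor.apply(prev)
# Detect Shi-Tomasi features only inside moving (foreground) regions.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or points is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track features from the previous frame into the current one.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    moving = new_points[status.flatten() == 1]
    # ... grouping by pixel distance and relative velocity would go here ...
    prev_gray, points = gray, moving.reshape(-1, 1, 2)

cap.release()
```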
22

Prioritized 3D Scene Reconstruction and Rate-Distortion Efficient Representation for Video Sequences

Imre, Evren 01 August 2007 (has links) (PDF)
In this dissertation, a novel scheme for 3D reconstruction of a scene from a 2D video sequence is presented. To this aim, first, the trajectories of the salient features in the scene are determined as a sequence of displacements via the Kanade-Lucas-Tomasi tracker and a Kalman filter. Then, a tentative camera trajectory with respect to a metric reference reconstruction is estimated. All frame pairs are ordered with respect to their amenability to 3D reconstruction by a metric that utilizes the baseline distances and the number of tracked correspondences between the frames. The ordered frame pairs are processed via a sequential structure-from-motion algorithm to estimate the sparse structure and camera matrices. The metric and the associated reconstruction algorithm are shown experimentally to outperform their counterparts in the literature. Finally, a mesh-based, rate-distortion efficient representation is constructed through a novel procedure driven by the error between a target image and its prediction from a reference image and the current mesh. At each iteration, the triangular patch whose projection on the predicted image has the largest error is identified. Within this projected region and its correspondence on the reference frame, feature matches are extracted. The pair with the least conformance to the planar model is used to determine the vertex to be added to the mesh. The procedure is shown to outperform the dense depth-map representation in all tested cases, and the block motion vector representation in scenes with a large depth range, in the rate-distortion sense.
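A hypothetical sketch of the frame-pair ordering idea described above. The abstract does not give the exact formula, so this assumes a pair scores higher with a wider baseline and more tracked correspondences; the scoring function and the sample data are illustrative only.

```python
# Rank frame pairs by amenability to 3D reconstruction, assuming
# (since the abstract gives no formula) that score grows with both
# baseline distance and the number of tracked correspondences.
def pair_score(baseline: float, num_correspondences: int) -> float:
    return baseline * num_correspondences

# (frame_i, frame_j) -> (baseline distance, tracked correspondences)
pairs = {(0, 5): (1.8, 240), (0, 1): (0.3, 310), (2, 7): (2.1, 150)}
ordered = sorted(pairs, key=lambda p: pair_score(*pairs[p]), reverse=True)
print(ordered)  # process the most reconstruction-friendly pairs first
```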
24

Facial Feature Tracking and Head Pose Tracking as Input for Platform Games

Andersson, Anders Tobias January 2016 (has links)
Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are points distributed on a face according to certain facial features, such as eye corners and the face contour. This makes it possible to use facial feature movements as a hands-free human-computer interaction technique. Such alternatives to traditional input devices can give a more interesting gaming experience, allow more intuitive controls, and potentially give greater access to computers and video game consoles for disabled users who have difficulty using their arms and/or fingers. This research explores using facial feature tracking to control a character's movements in a platform game. The aim is to interpret facial feature tracker data and convert facial feature movements into game input controls. The facial feature input is compared with other hands-free input methods, as well as traditional keyboard input. The other hands-free input methods explored are head pose estimation and a hybrid of facial feature and head pose estimation input. Head pose estimation extracts the angles at which the user's head is tilted; the hybrid input method utilizes both head pose estimation and facial feature tracking. The input methods are evaluated by user performance and by subjective ratings from voluntary participants playing a platform game with each input method. Performance is measured by the time, the number of jumps, and the number of turns it takes a user to complete a platform level. Jumping is an essential part of platform games: to reach the goal, the player has to jump between platforms, and an inefficient input method can make this difficult. Turning is the action of changing the player character's direction from facing left to facing right or vice versa; a high number of turns indicates that it is difficult to control the character's movements efficiently with that input method. The results suggest that keyboard input is the most effective input method, while also being the least entertaining. There is no significant difference in performance between facial feature input and head pose input. The hybrid input method has the best overall results of the alternative input methods: it achieved significantly better performance than the head pose and facial feature input methods, while showing no statistically significant difference from keyboard input. Keywords: Computer Vision, Facial Feature Tracking, Head Pose Tracking, Game Control
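A hypothetical sketch of the head-pose input idea described above: map estimated head angles to platformer commands. The threshold values and the angle conventions (degrees, yaw positive to the right, pitch positive upward) are illustrative assumptions, not taken from the thesis.

```python
from enum import Enum, auto

class Command(Enum):
    LEFT = auto()
    RIGHT = auto()
    JUMP = auto()
    NONE = auto()

def pose_to_command(yaw_deg: float, pitch_deg: float) -> Command:
    # Assumed mapping: tilt up to jump, turn left/right to move.
    if pitch_deg > 15.0:
        return Command.JUMP
    if yaw_deg < -10.0:
        return Command.LEFT
    if yaw_deg > 10.0:
        return Command.RIGHT
    return Command.NONE

print(pose_to_command(yaw_deg=-14.0, pitch_deg=2.0))  # Command.LEFT
```

A dead zone around the neutral pose, as in the thresholds above, is a common design choice for such mappings, so small involuntary head movements do not trigger commands.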
25

Video Stabilization and Target Localization Using Feature Tracking with Video from Small UAVs

Johansen, David Linn 27 July 2006 (has links) (PDF)
Unmanned Aerial Vehicles (UAVs) equipped with lightweight, inexpensive cameras have grown in popularity by enabling new uses of UAV technology. However, the video retrieved from small UAVs is often unwatchable due to high-frequency jitter. Beginning with an investigation of previous stabilization work, this thesis discusses the challenges of stabilizing UAV-based video. It then presents a software-based computer vision framework and discusses its use in developing a real-time stabilization solution. A novel approach to estimating intended video motion is then presented. Next, the thesis extends previous target localization work by allowing the operator to easily identify targets, rather than relying solely on color segmentation, to improve reliability and applicability in real-world scenarios. The resulting approach creates a low-cost, easy-to-use solution for aerial video display and target localization.
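A minimal sketch of feature-tracking video stabilization in the spirit of the thesis above: estimate frame-to-frame motion from tracked corners, smooth the accumulated trajectory, and derive per-frame corrections. The file path, feature counts, and smoothing radius are assumptions; the moving average stands in for the thesis's intended-motion estimation, which it does not reproduce.

```python
import cv2
import numpy as np

def stabilization_corrections(path: str, radius: int = 15) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    transforms = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7)
        if pts is None:
            break
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.flatten() == 1
        # Partial affine (translation + rotation + scale) between frames.
        m, _ = cv2.estimateAffinePartial2D(pts[good], new_pts[good])
        if m is None:
            break
        dx, dy = m[0, 2], m[1, 2]
        da = np.arctan2(m[1, 0], m[0, 0])
        transforms.append((dx, dy, da))
        prev_gray = gray
    cap.release()
    traj = np.cumsum(transforms, axis=0)
    # Moving-average smoothing as a stand-in for intended-motion estimation.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.column_stack([np.convolve(traj[:, i], kernel, mode="same")
                              for i in range(3)])
    # Per-frame (dx, dy, dangle) corrections to warp each frame with.
    return np.array(transforms) + (smooth - traj)
```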
26

An Onboard Vision System for Unmanned Aerial Vehicle Guidance

Edwards, Barrett Bruce 17 November 2010 (has links) (PDF)
The viability of small Unmanned Aerial Vehicles (UAVs) as a stable platform for specific applications has been significantly advanced in recent years. The initial focus of lightweight UAV development was to create a craft capable of stable and controllable flight; this is largely a solved problem. Currently, the field has progressed to the point that unmanned aircraft can be carried in a backpack, launched by hand, weigh only a few pounds, and navigate through unrestricted airspace. The most basic use of a UAV is to visually observe the environment and use that information to influence decision making. Previous attempts at using visual information to control a small UAV used an off-board approach, where the video stream from an onboard camera was transmitted down to a ground station for processing and decision making. These attempts achieved limited results, as the two-way transmission time introduced unacceptable amounts of latency into time-sensitive control algorithms. Onboard image processing offers a low-latency solution that avoids the negative effects of two-way communication with a ground station. The first part of this thesis shows that onboard visual processing is capable of meeting the real-time control demands of an autonomous vehicle, including an evaluation of potential onboard computing platforms; FPGA-based image processing is shown to be the ideal technology for lightweight unmanned aircraft. The second part of this thesis focuses on the onboard vision system implementation for two proof-of-concept applications. The first application uses machine vision algorithms to locate and track a target landing site for a UAV: GPS guidance was insufficient for this task, so a vision system was utilized to localize the target site during approach and provide course-correction updates to the UAV. The second application describes a feature detection and tracking subsystem that can be used in higher-level application algorithms.
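A hypothetical sketch of the first application described above: locate a known landing-site marker in the camera frame and derive a course correction from its offset to the image center. Template matching here is an illustrative stand-in; the thesis's actual machine vision algorithms (and their FPGA implementation) are not reproduced, and the threshold and sign conventions are assumptions.

```python
import cv2

def course_correction(frame, template):
    # Find the best match of the landing-site template in the frame.
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < 0.6:            # confidence threshold (assumed value)
        return None            # target not found in this frame
    th, tw = template.shape[:2]
    cx = top_left[0] + tw / 2  # target center in image coordinates
    cy = top_left[1] + th / 2
    h, w = frame.shape[:2]
    # Normalized offsets in [-1, 1] from image center; feed these to the
    # autopilot as course-correction updates (sign convention assumed).
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)
```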
27

Recognition Of Human Face Expressions

Ener, Emrah 01 September 2006 (has links) (PDF)
In this study, a fully automatic, scale-invariant feature extractor that requires no manual initialization or special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size; upper and lower facial templates are then used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification. The performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences: facial features are extracted in the first frame, and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method that analyzes the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations, and the filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is reduced using Principal Component Analysis, and the performances of different classifiers on the low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared on the JAFFE and AF facial expression databases.
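A minimal sketch of the holistic pipeline described above: filter a face image with a bank of Gabor filters at several scales and orientations, stack the responses into a Gabor jet, and reduce its dimensionality with PCA. The filter-bank parameters and component count are illustrative assumptions, not the study's actual settings.

```python
import cv2
import numpy as np

def gabor_jet(face: np.ndarray) -> np.ndarray:
    # Filter the (grayscale) face at 3 scales x 4 orientations and
    # concatenate the responses into one long feature vector.
    responses = []
    for lambd in (4, 8, 16):                          # wavelengths (scales)
        for theta in np.arange(0, np.pi, np.pi / 4):  # orientations
            kernel = cv2.getGaborKernel((31, 31), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5)
            responses.append(cv2.filter2D(face, cv2.CV_32F, kernel).ravel())
    return np.concatenate(responses)

def pca_reduce(jets: np.ndarray, n_components: int = 50) -> np.ndarray:
    # SVD-based PCA: rows are samples, columns are Gabor-jet features.
    centered = jets - jets.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```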
28

Vision Based Attitude Control

Hladký, Maroš January 2018 (has links)
The problem of precise pointing, and more specifically attitude control, has been present since the first days of flight and aerospace engineering. Precise attitude control is a necessity for a great variety of applications. In the air, planes and unmanned aerial vehicles need to orient themselves precisely; in space, a telescope or a satellite relies on attitude control to reach the stars or survey the Earth. Attitude control can be based on various principles, pre-calculated variables, and measurements. It is common to use gyroscopes and Sun/star/horizon sensors for attitude determination. While those technologies are well established in the industry, the rise in computational power and efficiency in recent years has enabled the processing of a far richer source of information: vision. In this thesis, a visual system is used for attitude determination and is blended with a control algorithm to form a Vision Based Attitude Control system.

A demonstrator is designed, built, and programmed for the purpose of Vision Based Attitude Control. It is based on the principle of visual servoing, a method that links image measurements to attitude control in the form of a set of joint velocities. The intermediate steps are image acquisition and processing, feature detection, feature tracking, and the computation of joint velocities in a closed-loop control scheme. The system is then evaluated in a series of partial experiments.

The results show that the detection algorithms used, Shi-Tomasi and Harris, perform equally well in feature detection and provide a large number of features for tracking. The pyramidal implementation of the Lucas-Kanade tracking algorithm proves to be a capable method for reliable feature tracking, invariant to rotation and scale change. To further evaluate the visual servoing, a complete demonstrator is tested; it shows the capability of visual servoing for the purpose of Vision Based Attitude Control. Improvements in the hardware and implementation are recommended and planned to push the system beyond the demonstrator stage into an applicable system.
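A minimal sketch of the classical image-based visual servoing law underlying a system like the demonstrator above: for tracked point features, build the interaction (image Jacobian) matrix and map the feature error to velocity commands. The feature depth Z, the gain, and the sample coordinates are assumed values; the thesis's exact formulation is not reproduced.

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    # Standard IBVS interaction matrix for a normalized image point (x, y)
    # at depth Z, relating camera velocity to image-point velocity.
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def control_velocity(features, desired, Z=1.0, gain=0.5):
    # Stack one interaction matrix per tracked feature, then apply the
    # classical law v = -lambda * pinv(L) * e to drive the error to zero.
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

v = control_velocity([(0.1, 0.05), (-0.2, 0.1), (0.0, -0.15)],
                     [(0.0, 0.0), (-0.25, 0.05), (0.05, -0.1)])
print(v)  # 6-vector: translational and angular velocity commands
```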
29

Robustní detekce pohybujících se objektů ve videu / Robust Detection of Moving Objects in Video

Klicnar, Lukáš January 2012 (has links)
Motion segmentation is an important process for separating moving objects from the background. Common methods usually assume a fixed camera; other approaches exist as well, but they are usually very computationally intensive. This work presents an approach to segmenting a scene into regions with coherent motion that runs faster than similar methods and is capable of online processing with no prior knowledge of the objects or the camera. The main assumption is that points belonging to a single object move together, and vice versa. The proposed method is based on tracking feature points and searching for groups with similar motion using a RANSAC-based algorithm. Short-range repair of broken tracks is applied to increase the overall robustness of tracking. The clusters found are subsequently processed to represent separate moving objects.
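A hypothetical sketch of the RANSAC-based grouping idea described above: repeatedly fit an affine motion model to tracked point displacements, peel off the inliers as one coherently moving group, and continue on the remainder. The reprojection threshold and minimum group size are assumed values, not taken from the thesis.

```python
import cv2
import numpy as np

def group_by_motion(prev_pts, curr_pts, min_group=10):
    prev_pts = np.asarray(prev_pts, dtype=np.float32)
    curr_pts = np.asarray(curr_pts, dtype=np.float32)
    remaining = np.arange(len(prev_pts))
    groups = []
    while len(remaining) >= min_group:
        # Fit one affine motion model to the remaining tracks via RANSAC.
        m, inliers = cv2.estimateAffine2D(prev_pts[remaining],
                                          curr_pts[remaining],
                                          method=cv2.RANSAC,
                                          ransacReprojThreshold=3.0)
        if m is None:
            break
        mask = inliers.ravel().astype(bool)
        if mask.sum() < min_group:
            break
        groups.append(remaining[mask])   # indices of one moving group
        remaining = remaining[~mask]     # search the rest for more groups
    return groups
```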
30

Prognostische Relevanz der Magnetresonanztomographie-Feature-Tracking-basierten quantifizierten Vorhoffunktion nach akutem Myokardinfarkt / Prognostic relevance of magnetic resonance imaging feature tracking-based quantified atrial function after acute myocardial infarction

Navarra, Jenny-Lou 08 January 2020 (has links)
No description available.
