71

Optical flow estimation using oversampled video sequences

Κατσένου, Αγγελική 21 May 2008 (has links)
A significant problem in video processing is the estimation of motion between successive video frames, often referred to as optical flow estimation. Motion estimation is used in a wide range of video applications, such as video compression, 3-D surface structure estimation, super-resolution image synthesis, and motion-based segmentation. Recent advances in sensor technology allow video frames to be captured at high rates. Techniques have been presented in the literature that exploit the more accurate representation of optical flow in the oversampled frame sequence, thereby achieving better motion estimation at the standard sampling rate of 30 frames/s. The computational complexity, and therefore the usefulness of these techniques in real-time applications, depends directly on the complexity of the matching algorithm used for motion estimation. In this thesis, some of the most recent techniques proposed in the literature were studied and implemented, and a more efficient (in terms of complexity) matching technique was developed that does not, at the same time, fall short in accuracy.
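The matching step whose complexity the abstract highlights is classically an exhaustive block-matching search. The following is a minimal illustrative sketch (not the thesis's algorithm; block size, search radius, and the SAD criterion are assumptions):

```python
import numpy as np

def block_match(prev, curr, block=8, radius=4):
    """Exhaustive block matching: for each block of `curr`, search a
    (2*radius+1)^2 window in `prev` for the best sum-of-absolute-
    differences (SAD) match. Returns one (dy, dx) offset per block,
    pointing from the block to its source location in `prev`."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block].astype(int)
            best, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window outside the frame
                    cand = prev[y:y + block, x:x + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors

# Synthetic check: shift a random image by (2, 3) and recover the motion.
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (32, 32)).astype(np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
mv = block_match(prev, curr)
```

The brute-force search costs O(radius²) per block, which is exactly why lower-complexity matching strategies matter at high frame rates.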
72

GAZE ESTIMATION USING SCLERA AND IRIS EXTRACTION

Periketi, Prashanth Rao 01 January 2011 (has links)
Tracking the gaze of an individual provides important information for understanding that person's behavior. Gaze tracking has been widely used in a variety of applications, from tracking consumers' gaze fixations on advertisements and controlling human-computer devices, to understanding the behavior of patients with various visual and/or neurological disorders such as autism. Gaze patterns can be identified using different methods, but most of them require specialized equipment, which can be prohibitively expensive for some applications. In this dissertation, we investigate the possibility of using the sclera and iris regions captured in a webcam sequence to estimate the gaze pattern. The sclera and iris regions in each video frame are first extracted using an adaptive thresholding technique. The gaze pattern is then determined from the areas of the different sclera and iris regions and the distances between tracked points along the irises. The technique is novel in that sclera regions are often ignored in the eye-tracking literature, while we demonstrate that they can be easily extracted from images captured by a low-cost camera and are useful in determining the gaze pattern. The accuracy and computational efficiency of the proposed technique are demonstrated through experiments with human subjects.
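The adaptive thresholding step can be sketched as a local-mean test: a pixel is foreground when it differs sufficiently from its neighborhood average. This is an illustrative stand-in, not the thesis's exact technique; `win` and `offset` are assumed parameters:

```python
import numpy as np

def adaptive_threshold(gray, win=15, offset=10):
    """Mark a pixel as foreground when it is brighter than the mean of
    its win x win neighborhood by more than `offset`. Edge pixels use
    an edge-replicated padding of the image."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode='edge')
    h, w = gray.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + win, x:x + win].mean()
            out[y, x] = gray[y, x] > local_mean + offset
    return out

# Demo: a small bright region (e.g. sclera) on a darker background.
img = np.full((40, 40), 40, dtype=np.uint8)
img[18:23, 18:23] = 220
mask = adaptive_threshold(img)
```

Because the threshold adapts to the local mean, the test stays robust to gradual illumination changes across the eye region, unlike a single global threshold.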
73

Speeding Up Gibbs Sampling in Probabilistic Optical Flow

Piao, Dongzhen 01 December 2014 (has links)
In today's machine learning research, probabilistic graphical models are used extensively to model complicated systems with uncertainty, to aid understanding of problems, and to support inference and prediction of unknown events. For inference tasks, exact methods such as junction tree algorithms exist, but they suffer from exponential growth of cluster size and thus cannot handle large, highly connected graphs. Approximate inference methods do not try to find exact probabilities, but rather give results that improve as the algorithm runs. Gibbs sampling, one of the approximate inference methods, has gained a lot of traction and is used extensively in inference tasks due to its ease of understanding and implementation. However, as problem sizes grow, even the faster algorithms need a speed boost to meet application requirements: the number of variables in an application's graphical model can range from tens of thousands to billions, depending on the problem domain, and the original sequential Gibbs sampling may not return a satisfactory result in limited time. Thus, in this thesis, we investigate ways to speed up Gibbs sampling. We study better initialization, blocking variables to be sampled together, and simulated annealing; these methods modify the algorithm itself. We also investigate ways to parallelize the algorithm: an algorithm is parallelizable if some steps do not depend on other steps, and we identify such dependencies in Gibbs sampling. We discuss how the choice of hardware and software architecture affects the parallelization result. We use the optical flow problem as an example to demonstrate the various speed-up methods we investigated. An optical flow method tries to find the movements of small image patches between two images in a temporal sequence. We demonstrate how to model it with a probabilistic graphical model and solve it using Gibbs sampling. The results of sequential Gibbs sampling are presented, with comparisons against the various speed-up methods and against other optical flow methods.
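The sequential Gibbs sampler that serves as the baseline can be sketched on a toy model. The Ising-style chain below (each variable agrees with its neighbors and a noisy observation) is an illustrative stand-in for the grid model the thesis builds for optical flow; `beta` and the iteration count are assumptions:

```python
import math
import random

def gibbs_ising(obs, iters=200, beta=1.5, seed=0):
    """Sequential Gibbs sampling on a 1-D chain of variables x_i in
    {-1, +1}. Each sweep resamples every variable from its exact
    conditional given its two neighbors and its observation obs[i]."""
    rnd = random.Random(seed)
    n = len(obs)
    x = list(obs)  # initialize at the observations
    for _ in range(iters):
        for i in range(n):
            # local field: observation plus neighbor couplings
            field = obs[i]
            if i > 0:
                field += beta * x[i - 1]
            if i < n - 1:
                field += beta * x[i + 1]
            # conditional P(x_i = +1) for energy -field * x_i
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))
            x[i] = 1 if rnd.random() < p_plus else -1
    return x

# Demo: the sampler "denoises" a single flipped observation.
obs = [1] * 10
obs[4] = -1
sample = gibbs_ising(obs)
```

Note the sequential dependency the thesis exploits for parallelization: within a sweep, resampling x_i reads the freshly updated x_{i-1}, so only variables that are not neighbors in the graph can be sampled concurrently.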
74

Collision risk detection in a moving car

Μαλέας, Νικόλαος 11 August 2011 (has links)
This diploma thesis was carried out at the Wire Communications Laboratory of the Department of Electrical and Computer Engineering. Its goal is a detection system that recognizes the danger posed on the road by crossing vehicles and issues a warning. The aim was to build a system able to distinguish cars moving dangerously towards us from all the other objects encountered while driving. This was achieved using optical flow methods and thresholding, on the basis of which the decision on the existence of danger was made.
75

Collision risk detection between a moving car and a preceding vehicle using video signal processing

Κακαρούντας, Δημήτριος 09 January 2012 (has links)
The subject of this diploma thesis is the study of various techniques for detecting vehicles and objects on roads using a digital video camera.
76

A METHOD FOR OPTICAL FLOW EVALUATION CONSIDERING RELIABILITY ESTIMATION

LUIZ EDUARDO A. SAUERBRONN 03 June 2002 (has links)
Many biological systems use vision as their primary sensing mechanism. Over millions of years of evolution, different species have demonstrated the great potential associated with vision. From the early sixties onwards, studies have been carried out to provide machines with this form of sensing; the research area involved in this task is called Computer Vision. Many Computer Vision tasks require the determination of a vector field describing the displacements between two consecutive frames of a generic video sequence. This vector field is called optical flow. Optical flow determination is still an open problem. This work proposes a new statistical estimator for optical flow. The proposed estimator has O(n) complexity and associates a degree of reliability with each estimate. It can be applied to any digital signal (not only images or video, but also sound, volumes, etc.) and has shown very promising results.
77

Motion control using optical flow of sparse image features

Seebacher, J. Paul 12 March 2016 (has links)
Reactive motion planning and local navigation remain a significant challenge in the motion control of robotic vehicles. This thesis presents new results on vision-guided navigation using optical flow. By detecting key image features, calculating optical flow, and leveraging time-to-transit (tau) as a feedback signal, control architectures can steer a vehicle so as to avoid obstacles while simultaneously using them as navigation beacons. Averaging and balancing tau over multiple image features successfully guides a vehicle along a corridor while avoiding looming objects in the periphery. In addition, the averaging strategy de-emphasizes noise associated with rotationally induced flow fields, mitigating the risk of positive feedback akin to the Larsen effect. A recently developed, biologically inspired binary keypoint description algorithm, FREAK, offers processing speed-ups that make vision-based feedback signals achievable. A Parrot AR.Drone 2.0 has proven to be a reliable platform for testing the architecture and has demonstrated the control law's effectiveness in using time-to-transit calculations for real-time navigation.
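The tau-balancing idea can be sketched as a minimal steering law: compare the average time-to-transit of features in the left and right image halves and steer away from the side that will transit sooner. The gain and exact form below are assumptions for illustration, not the thesis's controller:

```python
def steering_from_tau(taus_left, taus_right, k=1.0):
    """Balance average time-to-transit (tau) between image halves.
    Small tau means a feature will pass the camera soon (it is close
    or approaching fast), so steer away from the smaller-tau side.
    Positive output = steer right."""
    avg_l = sum(taus_left) / len(taus_left)
    avg_r = sum(taus_right) / len(taus_right)
    return k * (avg_r - avg_l)

# Obstacles looming on the left (small tau) -> steer right (positive).
cmd = steering_from_tau([1.0, 1.2], [3.0, 3.2])
```

Averaging over many features, as in the sketch, is also what suppresses per-feature noise in the tau estimates.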
78

Closed-loop control of separated flows using real-time PIV

Varon, Eliott 13 October 2017 (has links)
Separated flows are ubiquitous in nature and in industrial settings (external aerodynamics of vehicles and buildings, flows around turbine blades, internal pipe flows), where they are generally a source of nuisance (vibrations, aeroacoustic noise, drag or lift penalties). Understanding and controlling such flows, which are characterized by a recirculation bubble, is therefore of great interest. A non-intrusive "visual" sensor developed at the PMMH laboratory is first improved to measure, in real time, the velocity fields (and quantities derived from them) of flows encountered in industrial wind tunnels. Based on an optical flow algorithm from computer vision, this novel experimental approach facilitates parametric studies and can be embedded in closed-loop controls. The dynamics of the flow over a flat plate are then investigated within a system-identification framework: a reduced-order model is learned via a dynamic observer, which captures and predicts the transition of the laminar boundary layer to turbulence. Finally, the fully turbulent wake behind a square-back Ahmed body (a simplified car geometry) is characterized, both classically and as a dynamical system. Different forcing strategies using continuous and pulsed micro-jets are tested, and a closed-loop control law that tracks and drives the recirculation is successfully implemented.
79

Pose Estimation in an Outdoors Augmented Reality Mobile Application

Nordlander, Rickard January 2018 (has links)
This thesis proposes a solution to the pose estimation problem for mobile devices in an outdoors environment. The proposed solution is intended for usage within an augmented reality application to visualize large objects such as buildings. As such, the system needs to provide both accurate and stable pose estimations with real-time requirements. The proposed solution combines inertial navigation for orientation estimation with a vision-based support component to reduce noise from the inertial orientation estimation. A GNSS-based component provides the system with an absolute reference of position. The orientation and position estimation were tested in two separate experiments. The orientation estimate was tested with the camera in a static position and orientation and was able to attain an estimate that is accurate and stable down to a few fractions of a degree. The position estimation was able to achieve centimeter-level stability during optimal conditions. Once the position had converged to a location, it was stable down to a couple of centimeters, which is sufficient for outdoors augmented reality applications.
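The fusion of a fast-drifting inertial estimate with a slower absolute reference (here, the vision/GNSS components) is commonly illustrated with a complementary filter. The sketch below is a generic illustration of that fusion pattern, not the thesis's estimator; `alpha` and `dt` are assumed values:

```python
def complementary_filter(gyro_rates, abs_angles, dt=0.01, alpha=0.98):
    """Fuse a rate sensor (responsive but drifting) with an absolute
    angle reference (drift-free but noisy/slow): integrate the rate,
    then pull the estimate toward the reference each step."""
    angle = abs_angles[0]
    out = []
    for rate, ref in zip(gyro_rates, abs_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * ref
        out.append(angle)
    return out

# Demo: a gyro with a constant 0.5 rad/s bias while the true angle
# stays at 1.0. Alone, integration would drift without bound; with the
# absolute reference the error stays bounded near the fixed point
# alpha*bias*dt / (1 - alpha) above the true angle.
n = 2000
est = complementary_filter([0.5] * n, [1.0] * n)
```

The same structure generalizes: the high-rate sensor sets responsiveness, while the absolute reference sets the long-term accuracy, which mirrors the accuracy-plus-stability requirement stated in the abstract.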
80

Spatio-temporal descriptors for human action recognition

Megrhi, Sameh 15 December 2014 (has links)
Due to the increasing demand for video analysis systems in recent years, human action detection and recognition is being targeted by the research community in order to make video description more accurate and faster, especially for large datasets, with applications ranging from video surveillance, movie summarization, and indexing to video games, robotics, and home automation. The ultimate purpose of human action recognition is to discern automatically what is happening in any given video. This thesis contributes to both the action detection and the recognition tasks by developing new description methods for human action recognition.

For action detection we introduce two novel approaches based on local interest points. The first is a simple yet effective method that detects human movements and extracts the video sub-sequences describing significant actions. Video sequences are first segmented into frame packets (FPs) and groups of interest points (GIPs), and the movements of the interest points are tracked, starting with simple controlled videos and progressing to videos of gradually increasing complexity. The controlled datasets generally contain a static background and a single actor performing one action, possibly repeated; the more complex, realistic datasets are collected from social networks. The second approach addresses human action recognition in realistic videos captured by moving cameras. Human motion is segmented spatio-temporally to determine the optimal number of frames sufficient for video description. Object edges are detected with the Canny edge detector, and the motion segmentation steps are applied to each frame: densely distributed interest points are detected and extracted based on dense SURF points with a temporal step of N frames, and the optical flow of the detected key points between two frames is computed with the iterative, pyramidal Lucas-Kanade technique. Since the scenes are captured by moving cameras, object motion is entangled with background and/or camera motion, so we compensate for the camera motion. Assuming camera motion exists when most points move in the same direction, optical flow vectors are clustered with a KNN clustering algorithm to detect it; when present, it is compensated by applying an affine transformation, parameterized by the camera flow magnitude and deviation, to each affected frame. After compensation, moving objects are segmented by temporal differencing, a bounding box is drawn around each detected moving object, and the recognition framework is applied to the persons moving within the bounding boxes. The goal is to reduce the amount of data involved in motion analysis while preserving the most important structural features.

The detected sequences are described by spatio-temporal descriptors. We propose two new descriptors based on tracking the trajectories of interest points; tracking and description are performed on the video patches that contain a motion, or part of a motion, detected by the preceding segmentation step. Both build on the SURF descriptor, chosen for its accuracy and above all its speed. The first, called ST-SURF, is a new combination of SURF and optical flow that tracks interest-point trajectories while retaining the relevant spatial information provided by SURF. The second is a Histogram of the Motion of the Trajectory (HMTO), based on the position and scale of each SURF point: for every detected SURF point, a neighborhood region is defined from its scale, dense optical flow is extracted over that patch, and motion trajectories are generated for each pixel from the horizontal and vertical flow components (u, v). The proposed descriptors are evaluated on a complex dataset and on a larger realistic dataset, both on their own and fused with other descriptors, within a bag-of-words classification pipeline, and show improved recognition rates compared to previously proposed state-of-the-art approaches.
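The Lucas-Kanade step mentioned above solves, at each pyramid level, a small least-squares system over a window around each key point. A minimal single-level sketch (illustrative only; the pyramidal, iterative variant wraps this step at multiple scales):

```python
import numpy as np

def lucas_kanade(prev, curr, y, x, win=7):
    """Single-level Lucas-Kanade at pixel (y, x): solve the over-
    determined system [Ix Iy] d = -It over a win x win window, where
    Ix, Iy are spatial gradients and It the temporal difference.
    Returns d = (dx, dy), the estimated displacement."""
    r = win // 2
    Iy, Ix = np.gradient(prev.astype(float))   # gradients along rows, cols
    It = curr.astype(float) - prev.astype(float)
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

# Demo: a smooth Gaussian blob shifted by one pixel in x.
yy, xx = np.mgrid[0:32, 0:32]
prev = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 18.0)
curr = np.roll(prev, 1, axis=1)       # scene moves +1 px in x
d = lucas_kanade(prev, curr, 16, 12)  # point on the blob's flank
```

The single-level solve is only valid for small displacements (the linearization of brightness constancy), which is precisely why the pyramidal scheme is needed for the larger motions found in realistic videos.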
