
Počítání lidí ve videu / Crowd Counting in Video

Kuřátko, Jiří January 2016 (has links)
This master's thesis developed a program that follows the trajectories of moving people and derives various statistics from them. In practice it is an effective marketing tool, usable for customer-flow analysis, evaluation of optimal opening hours, visitor-traffic analysis and much else. Histograms of oriented gradients, an SVM classifier and optical-flow tracking were used to solve the problem, and multiple hypothesis tracking was chosen for data association. The system's quality was evaluated on video footage of a street with a high concentration of pedestrians and on the school's camera system, where movement in a corridor was monitored and the number of people counted.
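As a rough illustration of the data-association step mentioned above, the sketch below implements a greedy nearest-neighbour association between existing tracks and new detections. It is a deliberately simplified stand-in for multiple hypothesis tracking, and every name and threshold in it is hypothetical rather than taken from the thesis:

```python
import math

def associate(tracks, detections, max_dist=50.0):
    """Greedy nearest-neighbour association of detections to tracks --
    a simplified stand-in for multiple hypothesis tracking.
    tracks: dict track_id -> (x, y) last known position
    detections: list of (x, y) positions in the current frame
    Returns (matches, unmatched_detections)."""
    matches = {}
    free = list(detections)
    for tid, pos in tracks.items():
        if not free:
            break
        best = min(free, key=lambda d: math.dist(pos, d))
        if math.dist(pos, best) <= max_dist:
            matches[tid] = best     # assign the closest detection
            free.remove(best)       # a detection feeds at most one track
    return matches, free
```

A real MHT implementation would instead keep several association hypotheses alive across frames and resolve them later; the greedy version above only illustrates the per-frame assignment problem.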

Microexpression Spotting in Video Using Optical Strain

Godavarthy, Sridhar 01 July 2010 (has links)
Microexpression detection plays a vital role in applications such as lie detection and psychological consultations. Current research is progressing in the direction of automating microexpression recognition by aiming at classifying microexpressions in terms of FACS Action Units. Although high detection rates are being achieved, the datasets used for evaluation of these systems are highly restricted. They are limited in size, usually still pictures or extremely short videos; motion constrained; contain only a single microexpression; and do not contain negative cases where microexpressions are absent. Only a few of these systems run in real time, and even fewer have been tested on real-life videos. This work proposes a novel method for automated spotting of facial microexpressions as a preprocessing step to existing microexpression recognition systems. By identifying and rejecting sequences that do not contain microexpressions, longer sequences can be converted into shorter, constrained, relevant sequences, each comprising a single microexpression, which can then be passed as input to existing systems, improving their performance and efficiency. The method exploits the small temporal extent of microexpressions for their identification. This extent is determined by the period for which strain, due to the non-rigid motion caused during facial movement, is exerted on the facial skin. The subject's face is divided into sub-regions, and facial strain is calculated for each of these regions. The strain patterns in individual regions are used to identify subtle changes which facilitate the detection of microexpressions. The strain magnitude is calculated using the central difference method over a robust and dense optical flow field of each subject's face. The computed strain is then thresholded using a variable threshold. If the duration for which the strain is above the threshold corresponds to the duration of a microexpression, a detection is reported.
The datasets used for algorithm evaluation comprise a mix of natural and enacted microexpressions. The results were promising, with up to an 80% true detection rate. The increased number of false positives in the Canal 9 dataset can be attributed to the subjects' talking, which causes fine movements in the mouth region. Performing speech detection to identify sequences where the subject is talking, and excluding the mouth region during those periods, could help reduce the number of false positives.
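The strain-based spotting idea described above can be sketched as follows: compute the optical strain magnitude from a dense flow field by central differences, then report spans whose above-threshold duration matches a microexpression. This is an illustrative reconstruction, not the author's code; the function names and the Frobenius-style magnitude are assumptions:

```python
import numpy as np

def strain_magnitude(u, v):
    """Optical strain magnitude from a dense flow field (u, v), using
    central differences (np.gradient) -- a sketch of the idea only."""
    du_dy, du_dx = np.gradient(u)   # np.gradient returns d/drow, d/dcol
    dv_dy, dv_dx = np.gradient(v)
    e_xx, e_yy = du_dx, dv_dy       # normal strain components
    e_xy = 0.5 * (du_dy + dv_dx)    # shear strain component
    return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)

def spot(strain_per_frame, threshold, min_len, max_len):
    """Report frame spans where strain stays above threshold for a
    duration consistent with a microexpression (min_len..max_len frames).
    A run still open at the end of the sequence is ignored."""
    spans, start = [], None
    for i, s in enumerate(strain_per_frame):
        if s > threshold and start is None:
            start = i
        elif s <= threshold and start is not None:
            if min_len <= i - start <= max_len:
                spans.append((start, i))
            start = None
    return spans
```

Note that a uniform translation of the whole face produces zero strain, which is what lets this measure ignore rigid head motion and respond only to skin deformation.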

Measuring Respiratory Frequency Using Optronics and Computer Vision

Antonsson, Per, Johansson, Jesper January 2021 (has links)
This thesis investigates the development and use of software to measure the respiratory frequency of cows using optronics and computer vision. It examines mainly two different strategies of image and signal processing and their performance for different input qualities. The effect of heat stress on dairy cows and the high transmission risk of pneumonia in calves make this investigation highly relevant, since both conditions share the same symptom: increased respiratory frequency. The data set used in this thesis consisted of recordings of dairy cows in different environments and from varying angles. Recordings where the authors could determine a true breathing frequency by monitoring body movements were accepted into the data set and used to test and develop the algorithms. One method developed in this thesis estimated the breathing rate in the frequency domain by the Fast Fourier Transform and was named "N-point Fast Fourier Transform." The other method was called "Breathing Movement Zero-Crossing Counting." It estimated a signal in the time domain, whose fundamental frequency was determined by a zero-crossing algorithm as the breathing frequency. The results showed that both developed algorithms successfully estimated a breathing frequency with a reasonable error margin for most of the data set. The zero-crossing algorithm showed the most consistent results, with an error margin lower than 0.92 breaths per minute (BPM) for twelve of thirteen recordings. However, it is limited to recordings where the camera is placed above the cow. The N-point FFT algorithm estimated the breathing frequency with error margins between 0.44 and 5.20 BPM for the same recordings as the zero-crossing algorithm. This method is not limited to a specific camera angle but requires the cow to be relatively stationary to obtain accurate results. It could therefore also be evaluated on the remaining three recordings of the data set.
The error margins for these recordings were measured between 1.92 and 10.88 BPM. Both methods had execution times acceptable for real-time implementation. The data set was, however, too incomplete to determine performance with recordings from different optronic devices.
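The two estimators can be illustrated on a synthetic movement signal. The sketch below is an assumption-laden reconstruction of the general idea (an N-point FFT peak picker and a rising zero-crossing counter), not the thesis's implementation:

```python
import numpy as np

def bpm_fft(signal, fs, n=4096):
    """Dominant frequency via an N-point FFT, in breaths per minute.
    The FFT length n and DC-skipping are illustrative choices."""
    sig = np.asarray(signal, float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig, n))       # zero-padded to n points
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

def bpm_zero_crossing(signal, fs):
    """Breathing rate from the number of rising zero crossings of a
    mean-removed movement signal."""
    sig = np.asarray(signal, float) - np.mean(signal)
    rising = np.sum((sig[:-1] < 0) & (sig[1:] >= 0))  # one per breath cycle
    duration_min = len(sig) / fs / 60.0
    return rising / duration_min
```

On a clean 0.45 Hz sinusoid (27 BPM) sampled at 10 Hz for one minute, both estimators land within a couple of BPM of the truth; the FFT version trades resolution for robustness to noise, while the crossing counter is exact only when the signal is clean.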

Omnidirectional Optical Flow and Visual Motion Detection for Autonomous Robot Navigation

Stratmann, Irem 06 December 2007 (has links)
Autonomous robot navigation in dynamic environments requires robust detection of egomotion and independent motion. This thesis introduces a novel solution to the problem of visual independent motion detection by interpreting the topological features of the omnidirectional dense optical flow field and determining the background (egomotion) direction. The thesis solves the problem of visual independent motion detection in four interdependent subtasks. Independent Motion Detection can only be accomplished if egomotion detection yields a relevant background motion model. Therefore, the problem of Egomotion Detection is solved first by exploiting the topological structures of the global omnidirectional optical flow fields. The estimation of the optical flow field is the prerequisite of the Egomotion Detection task. Since the omnidirectional projection introduces non-affine deformations on the image plane, known optical flow calculation methods have to be modified to yield accurate results. This modification is introduced here as another subtask, Omnidirectional Optical Flow Estimation. The experiments concerning 3D omnidirectional scene capturing are grouped under the fourth subtask, 3D Omni-Image Processing.

On the Use of Temporal Information for the Reconstruction of Magnetic Resonance Image Series

Klosowski, Jakob 26 February 2020 (has links)
No description available.

Object Trajectory Estimation Using Optical Flow

Liu, Shuo 01 May 2009 (has links)
Object trajectory tracking is an important topic in many different areas. It is widely used in robotics, traffic monitoring, the movie industry, and elsewhere. Optical flow is a useful tool in the object tracking field: it computes the motion of each pixel between two frames and thus provides a possible way to recover the trajectories of objects. Numerous papers describe implementations of optical flow; some results are acceptable, but many applications face limitations. In most previous applications the camera is static, so it is easy to apply optical flow to identify moving targets in a scene and obtain their trajectories. When the camera moves, a global motion is added to the local motion, which complicates the issue. In this thesis we use a combination of optical flow and image correlation to deal with this problem, with good experimental results. For trajectory estimation, we incorporate a Kalman filter with the optical flow. Not only can we smooth the motion history, but we can also predict the motion into the next frame. The addition of a spatio-temporal filter further improves the results in later processing stages.
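A minimal constant-velocity Kalman filter of the kind described, smoothing a 1-D trajectory and predicting the position in the next frame, might look like this (the noise parameters q and r are hypothetical placeholders, not values from the thesis):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over noisy 1-D positions.
    Returns (smoothed positions, predicted next-frame position)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])   # state: [position, velocity]
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    prediction = float((F @ x)[0, 0])          # estimate for the next frame
    return smoothed, prediction
```

Fed a consistent linear trajectory, the filter converges to the true velocity and its one-step prediction lands close to the next true position, which is exactly the property the thesis uses to estimate motion into the next frame.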

Fast and accurate image registration. Applications to on-board satellite imaging. / Recalage rapide et précis des images. Applications pour l'imagerie satellite

Rais, Martin 09 December 2016 (has links)
This thesis starts with an in-depth study of fast and accurate sub-pixel shift estimation methods. A full comparison is performed based on the common shift estimation problems occurring in real-life applications, namely, varying SNR conditions, different displacement magnitudes, non-preservation of the brightness constancy constraint, aliasing, and most importantly, limited computational resources. Based on this study, in collaboration with CNES (the French space agency), two problems that are crucial for the digital optics of earth-observation satellites are analyzed. We first study the wavefront correction problem in an active optics context. We propose a fast and accurate algorithm to measure the wavefront aberrations on a Shack-Hartmann Wavefront Sensor (SHWFS) device observing the earth. We give here a review of state-of-the-art methods for SHWFS used on extended scenes (such as the earth) and devise a new method for improving wavefront estimation, based on a carefully refined use of the optical flow equation. This method takes advantage of the small shifts observed in a closed-loop wavefront correction system, yielding improved accuracy using fewer computational resources. We also propose two validation methods to ensure a correct wavefront estimation on extended scenes. While the first is based on a numerical adaptation of the (theoretical) lower bounds of image registration, the second rapidly discards landscapes based on the gradient distribution, inferred from the eigenvalues of the structure tensor. The second satellite-based application that we address is the numerical design of a new generation of Time Delay Integration (TDI) sensor. In this new concept, active real-time stabilization of the TDI is performed to extend the integration time considerably, and therefore to boost the images' SNR. The stripes of the TDI cannot be fused directly by addition because their position is altered by microvibrations.
These must be compensated in real time, using limited onboard computational resources, with high subpixel accuracy. We study the fundamental performance limits for this problem and propose a real-time solution that nonetheless gets close to the theoretical limits. We introduce a scheme using temporal convolution together with online noise estimation, gradient-based shift estimation and a non-conventional multiframe method for measuring global displacements. The obtained results are conclusive on the fronts of accuracy and complexity, and have strongly influenced the final decisions on the future configurations of Earth observation satellites at CNES. For more complex transformation models, a new image registration method performing accurate, robust model estimation through point matches between images is proposed. The difficulty coming from the presence of outliers causes the failure of traditional regression methods. In computer vision, RANSAC is definitely the most renowned method that overcomes such difficulties. It discriminates outliers by randomly generating minimal-sample hypotheses and verifying their consensus over the input data. However, its response is based on the single iteration that achieved the largest inlier support, discarding all other generated hypotheses. We show here that the resulting accuracy can be improved by aggregating all hypotheses. We also propose a simple strategy that allows 2D transformations to be averaged rapidly, leading to an almost negligible extra computational cost. We give practical applications to the estimation of projective transforms and homography-plus-distortion transforms. By including a straightforward adaptation of the locally optimized RANSAC in our framework, the proposed approach improves over every other available state-of-the-art method. A complete analysis of the proposed approach is performed, demonstrating its improved accuracy, stability and versatility.
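The hypothesis-aggregation idea can be sketched on the simplest possible model, a 2-D translation estimated from point matches: instead of keeping only the best RANSAC hypothesis, all accepted hypotheses are averaged with their inlier counts as weights. The thesis handles far richer models (homographies, distortion), so this is only an illustrative toy:

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from noisy point matches.
    Rather than returning the single best hypothesis, aggregate every
    accepted hypothesis weighted by its inlier count -- a simplified
    sketch of the aggregation idea."""
    rng = random.Random(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    hyps, weights = [], []
    for _ in range(iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                        # minimal sample: 1 match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int(np.sum(err < tol))
        if inliers >= 2:                           # reject degenerate hypotheses
            hyps.append(t)
            weights.append(inliers)
    return np.average(hyps, axis=0, weights=weights)
```

For translations, averaging hypotheses is a plain weighted mean; the thesis's contribution includes a way to average more complex 2D transformations just as cheaply.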

Vision-based Measurement Methods for Schools of Fish and Analysis of their Behaviors / 動画像処理に基づく魚群の計測手法と行動解析

Terayama, Kei 23 March 2016 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Human and Environmental Studies / Degree No. 甲第19807号 / 人博第778号 / 新制||人||187 (Main Library) / 27||人博||778 (Yoshida-South Library) / 32843 / Graduate School of Human and Environmental Studies, Kyoto University / (Examining committee) Professor 立木 秀樹, Associate Professor 櫻川 貴司, Professor 日置 尋久, Professor 阪上 雅昭 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Human and Environmental Studies / Kyoto University / DFAM

Real-Time Optical Flow Sensor Design and its Application on Obstacle Detection

Wei, Zhaoyi 29 April 2009 (has links) (PDF)
Motion is one of the most important features describing an image sequence. Motion estimation has been widely applied in structure from motion, vision-based navigation and many other fields. However, real-time motion estimation remains a challenge because of its high computational expense. The traditional CPU-based scheme cannot satisfy the power, size and computation requirements in many applications. With the availability of new parallel architectures such as FPGAs and GPUs, applying these new technologies to computer vision tasks such as motion estimation has been an active research field in recent years. In this dissertation, FPGAs have been applied to real-time motion estimation for their outstanding properties in computation power, size, power consumption and reconfigurability. It is believed in this dissertation that simply migrating the software-based algorithms and mapping them to a specific architecture is not enough to achieve good performance. Accuracy is usually compromised as the cost of migration. Improvement and optimization at the algorithm level are critical to performance. To improve motion estimation on the FPGA platform and prove the effectiveness of the method, three main efforts have been made in the dissertation. First, a lightweight tensor-based algorithm has been designed which can be implemented in a fully pipelined structure. Key factors determining the algorithm performance are analyzed from the simulation results. Second, an improved algorithm is then developed based on the analyses of the first algorithm. This algorithm applies a ridge estimator and temporal smoothing in order to improve the accuracy. A structure composed of two pipelines is designed to accommodate the new algorithm while using reasonable hardware resources. Third, a hardware friendly algorithm is developed to analyze the optical flow field and detect obstacles for unmanned ground vehicle applications. 
The motion component is de-rotated, de-translated and post-processed to detect obstacles. All these steps can be efficiently implemented in FPGAs. The characteristics of the FPGA architecture are taken into account throughout the development of all three algorithms. This dissertation also discusses, in different chapters, perspectives important for FPGA-based design: software simulation and optimization at the algorithm development stage, and hardware simulation and test-bench design at the hardware development stage. These are particularly important for the development of FPGA-based computer vision algorithms. The experimental results show that the proposed motion estimation module performs in real time and achieves over 50% improvement in motion estimation accuracy compared to previous work in the literature. The results also show that the motion field can be reliably applied to obstacle detection tasks.
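A software sketch of the gradient/tensor-based motion estimation that such a pipeline computes: accumulate the entries of the 2x2 structure tensor from image gradients and solve the normal equations for a single global flow vector. This is a textbook least-squares formulation, not the dissertation's FPGA design:

```python
import numpy as np

def tensor_flow(frame0, frame1):
    """Single global motion estimate from two frames via the gradient
    (structure-tensor / least-squares) formulation of optical flow."""
    f0, f1 = np.asarray(frame0, float), np.asarray(frame1, float)
    Iy, Ix = np.gradient(f0)          # spatial derivatives (rows, cols)
    It = f1 - f0                      # temporal derivative
    # Accumulate the 2x2 normal equations (entries of the motion tensor).
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)      # (vx, vy)
```

Because the sums reduce to multiply-accumulate operations over a pixel stream, this formulation maps naturally onto a fully pipelined hardware structure, which is the property the dissertation exploits.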

Real-Time Wind Estimation and Video Compression Onboard Miniature Aerial Vehicles

Rodriguez Perez, Andres Felipe 02 March 2009 (has links) (PDF)
Autonomous miniature air vehicles (MAVs) are becoming increasingly popular platforms for the collection of data about an area of interest for military and commercial applications. Two challenges often present themselves in the process of collecting this data. First, winds can be a significant percentage of the MAV's airspeed and can affect the analysis of the collected data if ignored. Second, the majority of MAV video is transmitted using RF analog transmitters instead of the more desirable digital video, due to the computationally intensive compression requirements of digital video. This two-part thesis addresses these two challenges. First, it presents an innovative method for estimating the wind velocity using an optical flow sensor mounted on a MAV. Using the flow of features measured by the optical flow sensor in the longitudinal and lateral directions, the MAV's crab angle is estimated. By combining the crab angle with measurements of ground track from GPS and the MAV's airspeed, the wind velocity is computed. Unlike other methods, this approach does not require flying a “varying” path (i.e., at multiple headings) or the use of magnetometers. Second, this thesis presents an efficient and effective method for video compression that drastically reduces the computational cost of motion estimation, which is usually more than 80% of the computation required to compress video. We therefore estimate the motion using (1) knowledge of camera locations (from available MAV IMU sensor data) and (2) the projective geometry of the camera. Both methods run onboard a MAV in real time, and their effectiveness is demonstrated through simulated and experimental results.
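The wind-triangle computation described above reduces to a vector difference between ground velocity and air velocity. Below is a sketch under simple assumptions (angles measured in degrees from north, and the returned direction being the one the wind blows toward); in the thesis the heading offset comes from the optical-flow-derived crab angle rather than a magnetometer:

```python
import math

def wind_velocity(heading_deg, airspeed, track_deg, groundspeed):
    """Wind as the vector difference between ground velocity (from GPS
    track and groundspeed) and air velocity (from heading and airspeed).
    Returns (wind speed, direction the wind blows toward, in degrees)."""
    def vec(angle_deg, speed):
        a = math.radians(angle_deg)            # angle from north, clockwise
        return (speed * math.sin(a), speed * math.cos(a))  # (east, north)
    ge, gn = vec(track_deg, groundspeed)       # ground velocity components
    ae, an = vec(heading_deg, airspeed)        # air velocity components
    we, wn = ge - ae, gn - an                  # wind components
    return math.hypot(we, wn), math.degrees(math.atan2(we, wn)) % 360.0
```

For example, an aircraft holding a northerly heading at 15 m/s whose GPS track is pushed east corresponds to a pure easterly-blowing wind; the crab angle is simply the difference between track and heading.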
