  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Detekce anomálií v chování davu ve video-datech z dronu / Crowd Behavior Anomaly Detection in Drone Videodata

Bažout, David January 2021 (has links)
Drones have found many new applications in recent years, including frequent use by national security forces. The aim of this work is to design and implement a tool for crowd behavior analysis in drone video data. The tool identifies suspicious behavior of persons and facilitates its localization. The main contributions include the design of a video stabilization algorithm that compensates for small camera jitters and recovers the scene after it is lost, and two proposed anomaly detectors that differ in their methods of feature vector extraction and background modeling. Compared to state-of-the-art approaches, the detectors achieve comparable results while additionally allowing online data processing.
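The abstract does not specify the detectors' internals; the following is a minimal sketch of one common background-modeling scheme for motion features (a per-cell running Gaussian model with a sigma test), offered as an illustration rather than the author's actual method. The grid shape, `alpha`, and `k` are assumed values.

```python
import numpy as np

def update_background(mean, var, feat, alpha=0.05):
    """Exponential running estimate of the per-cell feature background."""
    mean_new = (1 - alpha) * mean + alpha * feat
    var_new = (1 - alpha) * var + alpha * (feat - mean_new) ** 2
    return mean_new, var_new

def anomaly_mask(mean, var, feat, k=3.0, eps=1e-6):
    """Flag cells whose current feature deviates more than k sigma
    from the learned background."""
    return np.abs(feat - mean) > k * np.sqrt(var + eps)
```

With features such as per-cell optical-flow magnitudes, cells that suddenly move very differently from their history are flagged as anomalous, and the exponential update lets the model track slow scene changes online.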
142

Neuromorphic computation using event-based sensors : from algorithms to hardware implementations / Calcul neuromorphique à l'aide de capteurs évènementiels : algorithmes et implémentations matérielles

Haessig, Germain 14 September 2018 (has links)
This thesis is about the implementation of neuromorphic algorithms, using, as a first step, data from a silicon retina that mimics the behavior of the human retina, and then extending to all kinds of event-based signals. These event-based signals arise from a paradigm shift in signal representation that offers a high dynamic range, a precise temporal resolution, and sensor-level data compression. In particular, we study the development of a high-frequency monocular depth map generator, a real-time spike sorting algorithm for intelligent brain-machine interfaces, and an unsupervised learning algorithm for pattern recognition. Some of these algorithms (optical flow detection, depth map construction from stereovision) are developed in parallel on existing neuromorphic simulation platforms (SpiNNaker, TrueNorth), providing a fully neuromorphic processing pipeline, from sensing to computing, with a low power budget.
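Event-based signals of the kind described here are often summarized with a time surface: an exponentially decayed map of the most recent event per pixel. The sketch below is a generic illustration of that representation (the event tuple layout and `tau` are assumptions, not details taken from the thesis).

```python
import numpy as np

def time_surface(events, shape, t_ref, tau=50e-3):
    """Exponentially decayed map of the most recent event per pixel.
    `events` is an iterable of (x, y, timestamp, polarity) tuples."""
    last_t = np.full(shape, -np.inf)
    for x, y, t, p in events:
        if t <= t_ref:  # only events up to the reference time
            last_t[y, x] = max(last_t[y, x], t)
    ts = np.exp((last_t - t_ref) / tau)  # recent events decay less
    ts[np.isinf(last_t)] = 0.0           # pixels with no event stay at 0
    return ts
```

Because each pixel only fires on change, such maps carry the signal's native compression while preserving fine temporal structure for downstream recognition.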
143

EXPERIMENTAL STUDIES ON FREE JET OF MATCH ROCKETS AND UNSTEADY FLOW OF HOUSEFLIES

Angel David Lozano Galarza (10757814) 01 June 2021 (has links)
The aerodynamics of insect flight is not well understood, although it has been extensively investigated with various techniques and methods. The complexity is twofold: complex flow behavior and intricate wing morphology. The complex flow behavior in insect flight results from flow unsteadiness and three-dimensional effects; however, most experimental studies of insect flight have used 2D flow measurement techniques, while 3D techniques are still under development. Even with the most advanced 3D flow measurement techniques, it remains impossible to measure the flow field close to the wings and body. Moreover, the intricate wing morphology complicates experimental studies with mechanical flapping wings and makes it difficult for mechanical models to mimic the flapping wing motion of insects. Therefore, to understand the authentic flow phenomena and associated aerodynamics of insect flight, it is necessary to study actual flying insects.

In this thesis, a recently introduced schlieren photography technique is first tested on the free jet of match rockets, together with a physics-based optical flow method, to explore its potential for quantifying unsteady flow. The schlieren photography and optical flow method are then adapted to tethered and freely flying houseflies to investigate the complex wake flow and structures. Finally, a particle tracking velocimetry system (Shake-the-Box) is used to resolve the complex wake flow of a tethered housefly and to acquire preliminary 3D flow field data.
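As a rough illustration of how displacement can be quantified between two schlieren frames, the sketch below recovers an integer-pixel patch shift by exhaustive correlation, a crude PIV-style stand-in for the physics-based optical flow method the thesis actually uses; `max_shift` and the periodic shift are simplifying assumptions.

```python
import numpy as np

def patch_displacement(f0, f1, max_shift=3):
    """Integer-pixel displacement between two frames, found by testing
    every shift within +/-max_shift and keeping the one with the highest
    correlation (periodic boundaries via np.roll, for simplicity)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(f1, -dy, axis=0), -dx, axis=1)
            score = np.sum(f0 * shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best  # (dy, dx)
```

Gradient-based optical flow refines this idea to dense, sub-pixel fields, which is what makes it attractive for unsteady-flow quantification.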
144

Experimental and Numerical Studies on the Projective Dye Visualization Velocimetry in a Squared Vertical Tube

Johnson, Mark Bradley 24 May 2023 (has links)
No description available.
145

An Optical Flow Implementation Comparison Study

Bodily, John M. 12 March 2009 (has links) (PDF)
Optical flow is the apparent motion of brightness patterns within an image scene. Algorithms that calculate the optical flow for a sequence of images are useful in a variety of applications, including motion detection and obstacle avoidance. Typical optical flow algorithms are computationally intensive and run slowly when implemented in software, which is problematic since many potential applications require real-time calculation to be useful. To increase performance, optical flow has recently been implemented on FPGA and GPU platforms. These devices can process optical flow in real time, but are generally less accurate than software solutions. For this thesis, two different optical flow algorithms have been implemented to run on a GPU using NVIDIA's CUDA SDK. Previous FPGA implementations of the algorithms exist and are used to compare the FPGA and GPU devices for the optical flow calculation. The first algorithm calculates optical flow using 3D gradient tensors and processes 640x480 images at about 238 frames per second with an average angular error of 12.1 degrees when run on a GeForce 8800 GTX GPU. The second algorithm uses increased smoothing and a ridge regression calculation to produce a more accurate result: it reduces the average angular error by about 2.3x, but the additional computational complexity also reduces the frame rate by about 1.5x. Overall, the GPU outperforms the FPGA in frame rate and accuracy, but requires much more power and is not as flexible. The most significant advantage of the GPU is the reduced design time and effort needed to implement the algorithms, with the FPGA designs requiring 10x to 12x the effort.
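The average angular error quoted above is conventionally computed by extending each flow vector (u, v) to a 3D vector (u, v, 1) and measuring the angle between the estimated and ground-truth directions (the Barron et al. convention); a sketch:

```python
import numpy as np

def average_angular_error(u_est, v_est, u_gt, v_gt):
    """Mean angular error in degrees between estimated and ground-truth
    flow fields, each vector extended to (u, v, 1) before comparison."""
    num = u_est * u_gt + v_est * v_gt + 1.0
    den = (np.sqrt(u_est**2 + v_est**2 + 1.0) *
           np.sqrt(u_gt**2 + v_gt**2 + 1.0))
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))  # clip guards rounding
    return np.degrees(ang).mean()
```

The appended 1 keeps the metric finite at zero-motion pixels, where a purely 2D angle would be undefined.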
146

A Comparison Between KeyFrame Extraction Methods for Clothing Recognition

Lindgren, Gabriel January 2023 (has links)
With ever-growing video consumption, applications and services need smart approaches to improve the user experience. Key frames extracted from a video capture useful information about the entire video and can be used to better describe its content. At present, most key frame extraction (KFE) methods aim at selecting multiple frames from videos composed of multiple scenes and coming from various contexts. In this study, a proposed key frame extraction method that extracts a single frame for clothing recognition purposes is implemented and compared against two other methods. The proposed method uses the state-of-the-art object detector YOLO (You Only Look Once) to ensure that extracted key frames contain people, and is referred to as YKFE (YOLO-based Key Frame Extraction). YKFE is compared against a simple baseline named MFE (Middle Frame Extraction), which always extracts the middle frame of the video, and the well-known optical-flow-based method referred to as Wolf KFE, which extracts the frames with the lowest amount of optical flow. The YOLO model is pre-trained and further fine-tuned on a custom dataset. Furthermore, three versions of the YKFE method are developed and compared, each using a different measurement to select the best key frame: the first uses optical flow, the second aspect ratio, and the third a combination of both. Finally, three proposed metrics, RDO (Rate of Distinguishable Outfits), RSAR (Rate of Successful API Returns), and AET (Average Extraction Time), are used to evaluate and compare the methods on two test sets of 100 videos each. The results show that YKFE yields more reliable results while taking significantly more time than both MFE and Wolf KFE. However, neither MFE nor Wolf KFE considers whether frames contain people, so the context in which these methods are used strongly affects the rate of successful key frame extractions. Finally, as an experiment, a method named Slim YKFE was developed as a combination of MFE and YKFE, substantially reducing extraction time while maintaining high accuracy.
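A minimal sketch of the YKFE selection logic as described: filter frames by a person check, then pick the candidate with the lowest motion score, falling back to the middle frame (MFE behaviour) when no person is found. `contains_person` and `motion_score` are hypothetical stand-ins for the YOLO detector and the optical-flow measurement.

```python
def select_key_frame(frames, contains_person, motion_score):
    """Return the index of the best key frame: among frames that pass
    the person check, the one with the lowest motion score; if none
    qualifies, fall back to the middle frame."""
    candidates = [(i, f) for i, f in enumerate(frames) if contains_person(f)]
    if not candidates:
        return len(frames) // 2  # MFE fallback
    return min(candidates, key=lambda c: motion_score(c[1]))[0]
```

Low motion is used as a proxy for sharpness: frames with little optical flow are less likely to be motion-blurred, which matters for downstream clothing recognition.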
147

Active Regulation of Speed During a Simulated Low-altitude Flight Task: Altitude Matters!

Bennett, April M. 27 December 2006 (has links)
No description available.
148

Hybrid marker-less camera pose tracking with integrated sensor fusion

Moemeni, Armaghan January 2014 (has links)
This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. 
The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows an improved performance compared to existing techniques, such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
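The recursive particle filtering at the core of the fusion algorithm can be illustrated with a generic 1-D predict/update/resample cycle, where inertial data drives the prediction and a visual fix drives the weighting; this is a textbook sketch, not the thesis's actual state-space model, and `proc_std`/`meas_std` are assumed values.

```python
import numpy as np

def particle_filter_step(particles, weights, control, meas, rng,
                         proc_std=0.05, meas_std=0.1):
    """One predict/update/resample cycle for a 1-D position state.
    `control` is the inertial displacement; `meas` the visual fix."""
    # predict: apply inertial motion with process noise
    particles = particles + control + rng.normal(0, proc_std, len(particles))
    # update: weight by Gaussian likelihood of the visual measurement
    weights = weights * np.exp(-0.5 * ((meas - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

A monitoring layer like the one described would simply skip the update step (keeping pure inertial prediction) whenever the vision data is judged unreliable.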
149

Contrôle visuel du déplacement en trajectoire courbe : approche sensorimotrice du rôle structurant du flux optique / Visual control of movement along a curved trajectory: a sensorimotor approach to the structuring role of optic flow

Authié, Colas 20 October 2011 (has links)
The main purpose of this dissertation is to determine the role of the direction and movement of the eyes and the head in the perception and control of self-motion along curved trajectories, with respect to the properties of the optic flow generated during movement through a stable environment. We use two experimental methods: a behavioral approach on a driving simulator, and a psychophysical approach that assesses human observers' ability to perceive the direction of self-motion. Combined, these methods highlight the behavioral effects of an active perception of self-motion. The introduction reviews current knowledge of perceptuo-motor strategies during curve driving, stressing both (1) the particular role of the tangent point in the studied situation of driving on a delimited road, and of optic flow in general (the apparent transformation of the optic array during self-motion), which underscores the human capability to interpret movement spatially, and (2) the inseparability of movement and perception. We then address the role of combined head-and-eye movements from a functional perspective on the control of self-motion.

In a first experimental section, we analyze the orienting movements of the head during simulated curve driving. We show that head movements are independent of the handling of the steering wheel and actively participate in orienting gaze toward the tangent point. In a second experimental section, we describe the combined movements of the head and eyes with respect to the geometry of the road environment. In a third section, we analyze gaze behavior in more detail as a function of tangent point direction and the local speed of optic flow. We show that the tangent point corresponds to a local minimum of optic flow speed, and that the global component of the optic flow induces a systematic optokinetic nystagmus. In a fourth, psychophysical study, we examine the effect of varying gaze direction on the discrimination of the direction of self-motion. We show that trajectory discrimination thresholds are minimal when gaze is oriented toward an area of minimal flow speed. Finally, we propose a model of trajectory change detection, based on a Weber fraction of foveal flow speeds, that predicts the experimental thresholds very precisely. The observed gaze orientation strategies (combined head and eye movements toward the tangent point) are compatible with this model and with the hypothesis of an active, optimal selection of the information contained in the optic flow.
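The Weber-fraction threshold model can be illustrated numerically: if the discrimination threshold is a fixed fraction of the foveal flow speed, and flow speed is minimal at the tangent point, then thresholds are minimal when gaze is on the tangent point. The profile parameters (`v_min`, `slope`) and the Weber fraction below are illustrative assumptions, not the fitted values from the thesis.

```python
import numpy as np

def flow_speed(gaze_offset_deg, v_min=0.5, slope=0.12):
    """Toy profile of foveal flow speed around the tangent point:
    minimal at the tangent point, growing with angular distance."""
    return v_min + slope * np.abs(gaze_offset_deg)

def discrimination_threshold(gaze_offset_deg, weber=0.08):
    """Weber's law: the just-noticeable trajectory change is a fixed
    fraction of the flow speed at the fovea."""
    return weber * flow_speed(gaze_offset_deg)
```

Under this model, orienting gaze at the flow-speed minimum (the tangent point) is the optimal strategy for detecting trajectory changes, which matches the observed behavior.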
150

New formulation of optical flow for turbulence estimation

Chen, Xu 08 October 2015 (has links)
Optical flow is a powerful tool for motion estimation, able to extract dense velocity fields from image sequences. In this study, we employ the method to retrieve incompressible turbulent motions precisely. For 2D turbulence estimation, it consists of minimizing an objective function composed of an observation term and a regularization term. The observation term is based on the transport equation of a passive scalar field; for scalar images that are not fully resolved, we propose to use the mixed model of large eddy simulation (LES) to account for the interaction between large-scale motions and the unresolved ones. The regularization term is based on the continuity equation of 2D incompressible flows. The proposed formulation is evaluated on synthetic and experimental images of two-dimensional turbulence, standard test cases of the optical flow community, and shows a significant improvement in the quality of the extracted velocity fields. In addition, we extend optical flow to three dimensions; since, to our knowledge, no reference 3D image sequences exist, multiple scalar databases are generated by direct numerical simulation (DNS) of turbulent flow with a passive scalar in order to evaluate the performance of optical flow in the 3D context. We propose two formulations that differ in the spatial order of the regularizer; numerical results show that the formulation with a second-order regularizer outperforms its first-order counterpart. We also pay special attention to the effect of the Schmidt number, which characterizes the ratio between the molecular diffusion of the scalar and the dissipation of the turbulence: the precision of the estimation increases as the Schmidt number increases. Overall, optical flow demonstrates its capability to reconstruct turbulent flow with excellent accuracy, and has the potential to become an effective flow measurement approach in the fluid mechanics community.
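The observation-plus-regularization minimization described here follows the classic variational optical flow structure; a sketch of the original Horn-Schunck iteration is below (the thesis replaces brightness constancy with scalar transport and uses a continuity-based regularizer, but the alternating update has the same shape). `alpha` and the periodic-boundary averaging are simplifying assumptions.

```python
import numpy as np

def local_mean(f):
    """4-neighbour average (periodic boundaries, for simplicity)."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(I0, I1, alpha=0.5, n_iter=300):
    """Classic Horn-Schunck iteration: minimize a data (brightness
    constancy) term plus alpha^2 times a smoothness term."""
    Iy, Ix = np.gradient((I0 + I1) / 2.0)  # spatial gradients
    It = I1 - I0                           # temporal derivative
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)
    for _ in range(n_iter):
        u_avg, v_avg = local_mean(u), local_mean(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

Swapping the data term for a scalar transport equation and the smoothness term for an incompressibility constraint, as proposed in the thesis, changes the physics encoded in `num` and `den` but not the overall alternating structure.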
