141 |
Uso de fluxo óptico na odometria visual aplicada a robótica / Using optical flow in visual odometry applied to robotics
Araújo, Darla Caroline da Silva, 1989-, 26 August 2018
Advisor: Paulo Roberto Gardel Kurka / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Original issue date: 2015 / Master's in Mechanical Engineering (Solid Mechanics and Mechanical Design)
Abstract: This work describes a visual odometry method that uses the optical flow technique to estimate the motion of a mobile robot from digital images captured by two stereoscopic cameras fixed on it, in order to build a map for localizing the robot. Besides offering an alternative to autonomous motion estimation based on other sensor types such as GPS, lasers, and sonar, the proposal relies on an optical processing technique of high computational efficiency. A 3D environment was built to simulate the robot's movement and to capture the images needed to estimate its trajectory and verify the accuracy of the proposed technique. The Lucas-Kanade optical flow technique is used to identify features in the images. The results of this work are of great importance for future studies in robotic navigation.
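As an illustration of the Lucas-Kanade step named in the abstract, here is a minimal sparse optical flow sketch using OpenCV; the file names and parameter values are assumptions for illustration, not the thesis's actual implementation:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from one of the stereo cameras
# (file names are hypothetical).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features to track in the first frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade: estimate where each feature moved in the next frame.
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts_prev, None, winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked points; the displacement vectors are the
# sparse optical flow that feeds the odometry stage.
good_prev = pts_prev[status.flatten() == 1].reshape(-1, 2)
good_curr = pts_curr[status.flatten() == 1].reshape(-1, 2)
flow = good_curr - good_prev
print(f"tracked {len(flow)} features, mean displacement {flow.mean(axis=0)}")
```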
|
142 |
Detekce anomálií v chování davu ve video-datech z dronu / Crowd Behavior Anomaly Detection in Drone Videodata
Bažout, David, January 2021
Drone applications have multiplied in recent years, and drones are also often used by national security forces. The aim of this work is to design and implement a tool for crowd behavior analysis in drone video data. The tool identifies suspicious behavior of individuals and facilitates its localization in the footage. The main contributions include the design of a suitable video stabilization algorithm that compensates for small camera jitter and traces the scene back when it is lost. Furthermore, two anomaly detectors are proposed, differing in the method of feature vector extraction and background modeling. Compared to state-of-the-art approaches, they achieve comparable results while additionally enabling online data processing.
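The abstract does not spell out the detectors' internals, but a common pattern for this kind of crowd anomaly detection, feature vectors derived from optical flow compared against a running background model, can be sketched as follows; this is a generic illustration under assumed parameters, not the thesis's actual design:

```python
import cv2
import numpy as np

class FlowAnomalyDetector:
    """Per-cell Gaussian background model over optical-flow magnitude."""

    def __init__(self, grid=(8, 8), alpha=0.05, k=3.0):
        self.grid, self.alpha, self.k = grid, alpha, k
        self.mean = np.zeros(grid)   # running mean of flow magnitude per cell
        self.var = np.ones(grid)     # running variance per cell

    def features(self, prev_gray, gray):
        # Dense optical flow, pooled into a coarse grid of mean magnitudes.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        gh, gw = self.grid
        h, w = mag.shape
        return mag[:h - h % gh, :w - w % gw] \
            .reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

    def update(self, feat):
        # Score deviation from the background model in sigma units, then
        # fold the observation into the running model (online processing).
        score = np.abs(feat - self.mean) / np.sqrt(self.var)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * feat
        self.var = (1 - self.alpha) * self.var + self.alpha * (feat - self.mean) ** 2
        return score > self.k  # boolean grid of anomalous cells
```

Processing one frame pair at a time keeps the model online, which matches the property the abstract highlights.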
|
143 |
Neuromorphic computation using event-based sensors : from algorithms to hardware implementations / Calcul neuromorphique à l'aide de capteurs évènementiels : algorithmes et implémentations matérielles
Haessig, Germain, 14 September 2018
Abstract: This thesis deals with the implementation of neuromorphic algorithms, using, as a first step, data from a silicon retina mimicking the behavior of the human retina, and then extending to all kinds of event-based signals. These event-based signals stem from a paradigm shift in data representation, allowing a high dynamic range, a precise temporal resolution, and sensor-level data compression. In particular, we study the development of a high-frequency monocular depth map generator, a real-time spike sorting algorithm for intelligent brain-machine interfaces, and an unsupervised learning algorithm for pattern recognition. Some of these algorithms (optical flow detection, depth map construction from stereovision) are also developed on existing neuromorphic platforms (SpiNNaker, TrueNorth), enabling a fully neuromorphic processing pipeline, from sensing to computation, with a low power budget.
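For readers unfamiliar with event-based sensing, the "paradigm shift in data representation" is the sparse event stream produced by a silicon retina; the sketch below shows that format and the time-surface representation commonly used to process such streams (the decay constant, array sizes, and example values are illustrative assumptions):

```python
import numpy as np

# An event camera outputs a sparse stream of events rather than frames:
# each event is (x, y, timestamp, polarity of the brightness change).
events = np.array([(12, 40, 0.000105, +1),
                   (13, 40, 0.000231, +1),
                   (12, 41, 0.000244, -1)],
                  dtype=[("x", int), ("y", int), ("t", float), ("p", int)])

def time_surface(events, shape, t_now, tau=0.010):
    """Per pixel, exponential decay of the time since the last event;
    a standard intermediate representation for event-based algorithms."""
    last = np.full(shape, -np.inf)   # pixels with no event decay to 0
    for e in events:
        last[e["y"], e["x"]] = e["t"]
    return np.exp(-(t_now - last) / tau)

print(time_surface(events, (64, 64), t_now=0.0003).max())
```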
|
144 |
Experimental Studies on Free Jet of Match Rockets and Unsteady Flow of Houseflies
Angel David Lozano Galarza (10757814), 01 June 2021
The aerodynamics of insect flight is not well understood despite having been extensively investigated with various techniques and methods. Its complexity is twofold: complex flow behavior and intricate wing morphology. The complex flow behavior in insect flight results from flow unsteadiness and three-dimensional effects. However, most experimental studies on insect flight have been performed with 2D flow measurement techniques, while 3D flow measurement techniques are still under development. Even with the most advanced 3D flow measurement techniques, it is still impossible to measure the flow field close to the wings and body. On the other hand, the intricate wing morphology complicates experimental studies with mechanical flapping wings and makes it difficult for mechanical models to mimic the flapping wing motion of insects. Therefore, to understand the authentic flow phenomena and associated aerodynamics of insect flight, it is necessary to study actual flying insects.
In this thesis, a recently introduced schlieren photography technique is first tested on the free jet of match rockets, together with a physics-based optical flow method, to explore its potential for quantifying unsteady flow. The schlieren photography and optical flow method are then adapted to tethered and freely flying houseflies to investigate the complex wake flow and structures. Finally, a particle tracking velocimetry system (Shake-the-Box) is used to resolve the complex wake flow of a tethered housefly and to acquire preliminary 3D flow field data.
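As a rough illustration of this measurement pipeline, dense optical flow between two schlieren frames yields a pixel-displacement field that calibration converts to velocities; the sketch below uses OpenCV's Farneback estimator as a generic stand-in for the physics-based optical flow method of the thesis, with placeholder file names and calibration constants:

```python
import cv2
import numpy as np

# Two consecutive schlieren frames of the jet (hypothetical file names).
f0 = cv2.imread("schlieren_0000.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("schlieren_0001.png", cv2.IMREAD_GRAYSCALE)

# Dense displacement field in pixels (Farneback here, standing in for the
# physics-based optical flow method used in the thesis).
flow = cv2.calcOpticalFlowFarneback(f0, f1, None, 0.5, 4, 25, 3, 7, 1.5, 0)

# Convert pixel displacements to velocities with the imaging calibration
# (both constants below are placeholders).
M_PER_PIXEL = 1.0e-4     # spatial resolution of the optical setup
DT = 1.0 / 5000.0        # inter-frame time of the high-speed camera
velocity = flow * (M_PER_PIXEL / DT)     # m/s, per pixel
speed = np.linalg.norm(velocity, axis=2)
print("peak jet speed ~", speed.max(), "m/s")
```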
|
145 |
Experimental and Numerical Studies on the Projective Dye Visualization Velocimetry in a Squared Vertical Tube
Johnson, Mark Bradley, 24 May 2023
No description available.
|
146 |
An Optical Flow Implementation Comparison Study
Bodily, John M., 12 March 2009 (PDF)
Optical flow is the apparent motion of brightness patterns within an image scene. Algorithms used to calculate the optical flow for a sequence of images are useful in a variety of applications, including motion detection and obstacle avoidance. Typical optical flow algorithms are computationally intensive and run slowly when implemented in software, which is problematic since many potential applications of the algorithm require real-time calculation in order to be useful. To increase performance of the calculation, optical flow has recently been implemented on FPGA and GPU platforms. These devices are able to process optical flow in real-time, but are generally less accurate than software solutions. For this thesis, two different optical flow algorithms have been implemented to run on a GPU using NVIDIA's CUDA SDK. Previous FPGA implementations of the algorithms exist and are used to make a comparison between the FPGA and GPU devices for the optical flow calculation. The first algorithm calculates optical flow using 3D gradient tensors and is able to process 640x480 images at about 238 frames per second with an average angular error of 12.1 degrees when run on a GeForce 8800 GTX GPU. The second algorithm uses increased smoothing and a ridge regression calculation to produce a more accurate result. It reduces the average angular error by about 2.3x, but the additional computational complexity of the algorithm also reduces the frame rate by about 1.5x. Overall, the GPU outperforms the FPGA in frame rate and accuracy, but requires much more power and is not as flexible. The most significant advantage of the GPU is the reduced design time and effort needed to implement the algorithms, with the FPGA designs requiring 10x to 12x the effort.
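The angular error quoted here is the standard optical flow accuracy measure introduced by Barron et al., which appends a unit temporal component to each flow vector and measures the 3D angle between estimate and ground truth; a small sketch:

```python
import numpy as np

def average_angular_error(flow_est, flow_gt):
    """Mean angular error in degrees between estimated and ground-truth
    flow fields of shape (H, W, 2), following Barron et al.'s convention."""
    u1, v1 = flow_est[..., 0], flow_est[..., 1]
    u2, v2 = flow_gt[..., 0], flow_gt[..., 1]
    # Treat each flow vector (u, v) as the space-time direction (u, v, 1)
    # and take the angle between the two unit vectors.
    num = u1 * u2 + v1 * v2 + 1.0
    den = np.sqrt((u1**2 + v1**2 + 1.0) * (u2**2 + v2**2 + 1.0))
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
```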
|
147 |
A Comparison Between KeyFrame Extraction Methods for Clothing Recognition
Lindgren, Gabriel, January 2023
With video consumption ever increasing, applications and services need smart approaches to improve the experience for their users. Key frames extracted from a video capture useful information about the entire video and help describe its content. At present, many key frame extraction (KFE) methods aim at selecting multiple frames from videos composed of multiple scenes and coming from various contexts. In this study, a proposed key frame extraction method that extracts a single frame intended for a clothing recognition API is implemented and compared against two other methods. The proposed method uses the state-of-the-art object detector YOLO (You Only Look Once) to ensure that extracted key frames contain people, and is referred to as YKFE (YOLO-based Key Frame Extraction). YKFE is compared against a simple baseline named MFE (Middle Frame Extraction), which always extracts the middle frame of the video, and the well-known optical-flow-based method referred to as Wolf KFE, which extracts the frames with the lowest amount of optical flow. The YOLO model is pre-trained and further fine-tuned on a custom dataset. Furthermore, three versions of the YKFE method are developed and compared, each using a different measurement to select the best key frame: the first optical flow, the second aspect ratio, and the third a combination of both. Finally, three proposed metrics, RDO (Rate of Distinguishable Outfits), RSAR (Rate of Successful API Returns), and AET (Average Extraction Time), are used to evaluate and compare the methods on two test sets of 100 videos each. The results show that YKFE yields more reliable results while taking significantly more time than both MFE and Wolf KFE. However, neither MFE nor Wolf KFE considers whether frames contain people, so the context in which the methods are used strongly affects the rate of successful key frame extractions. Finally, as an experiment, a method named Slim YKFE was developed as a combination of MFE and YKFE, substantially reducing extraction time while maintaining high accuracy.
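From the description above, the core of YKFE can be pictured as a person-detection filter followed by an optical flow score. The following sketch is a rough reconstruction under stated assumptions: the ultralytics detector interface, weights file, sampling step, and the choice of minimizing flow are illustrative, not the thesis's actual code:

```python
import cv2
import numpy as np
from ultralytics import YOLO  # assumed detector; any person detector works

model = YOLO("yolov8n.pt")  # hypothetical pre-trained weights

def extract_key_frame(video_path, step=5):
    """Return the frame that contains a person and has the least optical
    flow (the most stable view), in the spirit of the YKFE method."""
    cap = cv2.VideoCapture(video_path)
    best_frame, best_score, prev_gray = None, np.inf, None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # sample every `step` frames for speed
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                # Person check: discard frames with no detected person.
                has_person = any(int(b.cls) == 0  # COCO class 0 = person
                                 for b in model(frame)[0].boxes)
                if has_person:
                    flow = cv2.calcOpticalFlowFarneback(
                        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
                    score = np.linalg.norm(flow, axis=2).mean()
                    if score < best_score:
                        best_score, best_frame = score, frame
            prev_gray = gray
        idx += 1
    cap.release()
    return best_frame
```

The Slim YKFE variant mentioned above would presumably restrict this search to frames near the middle of the video, trading coverage for extraction time.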
|
148 |
Active Regulation of Speed During a Simulated Low-altitude Flight Task: Altitude Matters!
Bennett, April M., 27 December 2006
No description available.
|
149 |
Hybrid marker-less camera pose tracking with integrated sensor fusion
Moemeni, Armaghan, January 2014
This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows an improved performance compared to existing techniques, such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
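As a bare-bones picture of the recursive particle filtering fusion described above: pose hypotheses are propagated with inertial data, re-weighted against visual measurements, and resampled when degenerate. The sketch below is a deliberately simplified position-only illustration; the noise levels, state layout, and resampling rule are assumptions, and the thesis's full state additionally tracks orientation and includes the self-adjusting correction mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                      # number of particles
particles = np.zeros((N, 3))  # hypothetical 3-DoF position state
weights = np.full(N, 1.0 / N)

def predict(accel, dt, sigma_imu=0.05):
    """Propagate particles with an inertial measurement plus process noise."""
    particles[:] += accel * dt**2 / 2 + rng.normal(0, sigma_imu, particles.shape)

def update(visual_pos, sigma_vis=0.1):
    """Re-weight particles against a position fix from the vision pipeline."""
    global weights
    err = np.linalg.norm(particles - visual_pos, axis=1)
    weights = weights * np.exp(-0.5 * (err / sigma_vis) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / N

def estimate():
    """Weighted-mean pose estimate over all particles."""
    return weights @ particles
```

The monitoring system described in the abstract would sit above this loop, calling update only when the vision data is judged reliable and otherwise letting the inertial prediction carry the state.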
|
150 |
Contrôle visuel du déplacement en trajectoire courbe : approche sensorimotrice du rôle structurant du flux optique / Visual control of locomotion along curved trajectories: a sensorimotor approach to the structuring role of optic flow
Authié, Colas, 20 October 2011
Abstract: The main purpose of this dissertation is to determine the role of the direction and movement of the eyes and head in the perception and control of self-motion along curved trajectories, with respect to the properties of the optic flow generated by moving through a stable environment. We used two experimental methods: a psychophysical approach to assess human observers' ability to perceive the direction of self-motion, and a behavior-based approach on a driving simulator. Combined, these methods highlight the behavioral effects of an active perception of self-motion. The introduction reviews current knowledge of perceptuo-motor strategies during curve driving. In this context, we stress both the particular role of the tangent point (in the studied case of driving on a delimited road) and, more generally, the role of optic flow (the apparent transformation of the optic array during self-motion), emphasizing the human capacity to interpret movement spatially as well as the inseparability of movement and perception. We then address the role of combined head-and-eye movements from a functional perspective on the control of self-motion. In a first experimental section, we analyze the orienting movements of the head in simulated curve driving. We show that head movements are independent of the handling of the steering wheel and actively participate in orienting gaze toward the tangent point. In a second experimental section, we describe the combined movements of head and eyes with respect to the geometry of the road environment. In a third section, we analyze gaze behavior in more detail as a function of the tangent point direction and the local speed of optic flow. We show both that the tangent point corresponds to a local minimum of optic flow speed and that the global component of the optic flow induces a systematic optokinetic nystagmus. In a fourth section, a psychophysical study, we scrutinize the effect of varying gaze direction on the discrimination of the direction of self-motion. We show that trajectory discrimination thresholds are minimal when gaze is oriented toward an area of minimum flow speed. We finally propose a model of trajectory-change detection, relying on a Weber fraction of foveal flow speeds, which predicts the experimental thresholds very precisely. The observed gaze-orientation strategies (combined head and eye movements) toward the tangent point are compatible with this model and with the hypothesis of an active, optimal selection of the information contained in the optic flow.
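The Weber-fraction model mentioned in the abstract amounts to a simple rule: a trajectory change is predicted to be discriminable when the relative change in foveal flow speed exceeds a constant fraction. A minimal sketch follows; the fraction value is a placeholder, not the value fitted in the thesis:

```python
import numpy as np

WEBER_FRACTION = 0.1  # placeholder; the thesis fits this to observer data

def trajectory_change_detectable(foveal_speed_ref, foveal_speed_test,
                                 k=WEBER_FRACTION):
    """Weber's law on foveal optic-flow speed: a trajectory change is
    predicted to be discriminable when |dv| / v exceeds the fraction k."""
    dv = np.abs(foveal_speed_test - foveal_speed_ref)
    return dv / foveal_speed_ref >= k

# Example: gaze at the tangent point (a low-flow-speed region) makes the
# same absolute speed change easier to detect than gaze at a fast region,
# consistent with the minimal thresholds reported at minimum flow speed.
print(trajectory_change_detectable(2.0, 2.5))    # slow region: 25% change -> True
print(trajectory_change_detectable(20.0, 20.5))  # fast region: 2.5% change -> False
```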
|