111

Video sub-pixel frame alignment

Zetterberg, Zackeus January 2024
Video stabilization is an important aspect of video processing, especially for handheld devices where unintended camera movement can significantly degrade the resulting recording. This paper investigates four image-based methods for video stabilization. The study explores the Lucas-Kanade, inverse compositional Lucas-Kanade, Farnebäck optical flow, and GMFlow methods, evaluating their sub-pixel accuracy, real-time performance, and robustness to in-frame motion such as a person walking in front of the camera. The results indicate that while all methods achieve sub-pixel precision, real-time execution on a mobile phone is not feasible with the current implementations. Furthermore, the methods differ in how well they handle in-frame motion, with RANSAC-based approaches partially compensating for movement not induced by the camera. The paper also discusses the potential of machine learning techniques, represented by GMFlow, to improve stabilization quality at the cost of computational complexity. The findings offer valuable insights for the development of more efficient and robust video stabilization solutions.
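A minimal sketch of one of the compared approaches, sparse Lucas-Kanade tracking combined with RANSAC to discard in-frame motion, using standard OpenCV calls; the function name and parameter values are illustrative assumptions, not the thesis implementation.

```python
# Sketch: per-frame shift from sparse Lucas-Kanade flow plus RANSAC,
# assuming 8-bit grayscale frames; parameter values are illustrative.
import cv2
import numpy as np

def frame_shift(prev_gray, curr_gray):
    # Shi-Tomasi corners in the previous frame
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)
    # Pyramidal Lucas-Kanade gives sub-pixel point correspondences
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if good.sum() < 4:
        return np.zeros(2)
    # RANSAC down-weights points moved by in-frame motion (e.g. a walking person)
    M, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good], method=cv2.RANSAC)
    return M[:, 2] if M is not None else np.zeros(2)  # (dx, dy) camera shift
```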
112

Identifying seedling patterns in time-lapse imaging

Gustafsson, Nils January 2024
With a changing climate, it is necessary to investigate how different plants are affected by drought, which is the starting point for this project. The proposed project aims to apply machine learning tools to learn predictive patterns of Scots pine seedlings in response to drought conditions by measuring the canopy area and growth rate of the seedlings in time-lapse images. Five different families of Scots pine are studied in this project, so five different sets of time-lapse images will be used as the data set. The research group has previously created a method for finding the canopy area and computing the growth rate for the different families. Furthermore, the seedlings rotate in an individual pattern each day, which according to the research group could affect their tolerance to drought and is currently not being measured. We therefore propose a method using an object detection model, such as Mask R-CNN, to detect each seedling's region of interest. With the obtained region of interest, the goal will be to apply an object-tracking algorithm, such as a dense optical flow algorithm. Using methods such as Shi-Tomasi feature detection or Lucas-Kanade tracking, we aim to find feature points and track motion between images to determine the direction and velocity of each seedling's rotation. The tracking algorithms will then be evaluated on how well they estimate the rotation features against an annotated subset of the time-lapse data set.
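As a hedged sketch of the proposed tracking step (assuming a binary ROI mask from a detector such as Mask R-CNN; the function name and thresholds are illustrative, not the project's code):

```python
# Sketch: Shi-Tomasi features inside a seedling ROI, Lucas-Kanade tracking
# between consecutive time-lapse frames, rotation angle from the fitted
# similarity transform. roi_mask is a uint8 mask of the seedling region.
import cv2
import numpy as np

def seedling_rotation_deg(prev_gray, curr_gray, roi_mask):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=roi_mask)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    p, q = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    if len(p) < 3:
        return 0.0
    # Rotation component of the similarity transform mapping old to new points
    M, _ = cv2.estimateAffinePartial2D(p, q, method=cv2.RANSAC)
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0]))) if M is not None else 0.0
```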
113

The visual search effectiveness of an unmanned ground vehicle operator within an optical flow condition

Colombo, Gian 01 January 2008
Military reconnaissance and surveillance (R&S) missions are a crucial ingredient in overall mission safety and success. Proper intelligence provides the ability to counter and neutralize enemy positions and attacks. Accurate detection and identification of threatening targets is one of the driving forces behind good R&S intelligence. Understanding this ability and how it is affected by possible field conditions (i.e., motion) was the primary focus of this study. Motion is defined in the current study as the perceived forward self-motion of an unmanned ground vehicle (UGV), also called optical flow. For the purpose of this examination, both optical flow and the presence of a foil were manipulated. I examined how optical flow, perceived from an on-board frontal camera on a UGV, affected target detection and identification. The interaction of optical flow and foil distraction, as well as the level of influence each independently had on target detection and identification, were the principal questions examined. The effects originally predicted for the influence of optical flow on the visual search and identification task were not supported. Across manipulations of optical flow (i.e., present, not present), detection and identification were not significantly different, suggesting that the interruption rates of optical flow were ineffective at 29 frames per second (fps). The most interesting finding in the data set was, in fact, related to the recognition measure. Participants were asked to classify the tank they had detected in the environment as either a target or a non-target. Under conditions without optical flow, participants correctly rejected a foil tank as not being their target more often than they accepted the target as their actual target; this effect appeared only in the non-optical-flow condition. Further research should be conducted to properly evaluate the effects of varying frame-rate interruption on the perception of optical flow. This will lead to a better understanding of optical flow and how it ultimately affects detection and identification tasks.
114

Quantitative study of cardiac wall motion from cine-MRI using frequency-based optical flow methods

Xavier Magnier, Marie 08 December 2010
The aim of this thesis is to study parietal desynchronisation of the left ventricle from conventional cine-MRI. The first part of our work consisted in quantifying the wall motion of the left ventricle directly from the standard retrospectively gated SSFP cine-MRI sequences used to study cardiac function. The methods developed to measure displacements within the images are frequency-based optical flow methods. These techniques seem particularly well suited to the specificities of MRI: we demonstrate their robustness in the presence of Rician noise and of pixel-intensity variations over time, variations generally caused by through-plane motion of the heart, in particular in the short-axis orientation.
The second part of our work concerned the assessment of desynchronisation from short-axis cine-MRI. Time-displacement and time-velocity curves of the heart wall were obtained by tracking points of interest located on the left-ventricle segments close to the endocardium. To quantify the delays between these curves, we drew on studies carried out in echocardiography and proposed several parameters for measuring desynchronisation from cine-MRI. This work was the subject of a preliminary clinical study including patients considered normal after clinical examination and patients with normal or prolonged QRS duration and no ischemic heart disease. The dyssynchrony measurements from cardiac MRI were compared with measurements obtained with echocardiography; the first results indicate a correlation between the two. The third part of our work consisted in studying heart wall motion directly from the raw coil images of multichannel MRI. The optical flow algorithms developed and tested for this type of image showed that myocardial motion can be estimated, and the preliminary results are encouraging. The results of the preliminary study of left intraventricular asynchronism from MRI are also promising: cardiac cine-MRI could be an alternative to echocardiography, notably for weakly echogenic patients. Validating this technique for quantifying asynchronism from MRI is an important challenge, and a more detailed study is in progress, in particular to predict the response to CRT (cardiac resynchronisation therapy) of patients without ischemic heart disease who present mechanical desynchronisation, based on MRI and echographic measurements.
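A minimal sketch of the frequency-domain idea behind such methods, phase correlation between two image patches; this only illustrates the principle and is not the thesis's algorithm (the function name and patch handling are assumptions):

```python
# Sketch: integer displacement between two patches via phase correlation.
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    A = np.fft.fft2(patch_a)
    B = np.fft.fft2(patch_b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12            # keep only the phase (cross-power spectrum)
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2: dy -= h           # unwrap negative shifts
    if dx > w // 2: dx -= w
    return dx, dy
```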
115

INCORPORATING MACHINE VISION IN PRECISION DAIRY FARMING TECHNOLOGIES

Shelley, Anthony N. 01 January 2016
The inclusion of precision dairy farming technologies in dairy operations is an area of increasing research and industry interest. Machine vision based systems are suitable for the dairy environment as they do not inhibit workflow, are capable of continuous operation, and can be fully automated. The research of this dissertation developed and tested three machine vision based precision dairy farming technologies tailored to the latest generation of RGB+D cameras. The first system focused on testing various imaging approaches for the potential use of machine vision for automated dairy cow feed intake monitoring. The second system focused on monitoring the gradual change in body condition score (BCS) for 116 cows over a nearly 7-month period. Several automated BCS systems have been proposed by researchers, but none have monitored the gradual change in BCS for a duration of this magnitude. These gradual changes provide a great deal of beneficial and immediate information on the health condition of every individual cow being monitored. The third system focused on automated dairy cow feature detection using Haar cascade classifiers to detect anatomical features, including the tailhead, hips, and rear regions of the cow body. These features were chosen to help machine vision applications determine whether and where a cow is present in an image or video frame. Once a cow has been detected, it must then be automatically identified to keep the system fully automated; identification was also studied with a machine vision approach in this research as a complementary aspect to cow detection. Such systems have the potential to catch poor health conditions as they develop, aid in balancing the diet of the individual cow, and help farm management allocate resources, monetary and otherwise, in an appropriate and efficient manner. Several applications of this research are discussed along with future research directions, including the potential for additional automated precision dairy farming technologies, integrating many of these technologies into a unified system, and the use of alternative, potentially more robust machine vision cameras.
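A short, hedged sketch of the Haar-cascade detection step described above; the cascade file and image path are hypothetical (OpenCV ships no cow-feature cascades), only the API usage is standard:

```python
# Sketch: multi-scale Haar-cascade detection of a cow anatomical feature.
import cv2

cascade = cv2.CascadeClassifier("cow_tailhead_cascade.xml")  # hypothetical custom model
frame = cv2.imread("barn_frame.png")                          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

regions = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                   minSize=(60, 60))
for (x, y, w, h) in regions:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```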
116

Study and implementation of a calibration device for an automotive speedometer

Celestino, Marcelo 02 March 2007
This work gathers elements for the study and analysis of the main problems and difficulties inherent in the implementation of a device for calibrating the speedometers of automotive vehicles, in order to meet the legislation of the national traffic code defined by governmental decree No. 115 of 1998, which sets maximum permitted speedometer errors of ±5 km/h for speeds up to 100 km/h and ±5% for speeds above 100 km/h. First, the main speed-measurement methods covered in the literature are presented and compared. From this analysis, a new method based on the blur effect is proposed, in which the speed of the target surface is determined by analysing the characteristics and regularities contained in a single blurred image. Using a test bench that simulates the moving ground, a CCD (charge-coupled device) camera, and a frame grabber, images of the moving asphalt surface were acquired. The speed could then be determined from the regularities contained in the dynamic image caused by the blur effect. Results sufficient for speedometer calibration were obtained, with maximum errors below 5%. The technique, developed and evaluated in practice on the bench simulating moving asphalt, demonstrated a precision of 0.8% for speeds from 0 to 20 km/h, 1.5% from 20 to 60 km/h, and 2.5% from 60 to 80 km/h. Finally, the main factors limiting the errors to this order of magnitude were investigated.
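One common way to expose the regularities that a uniform motion blur leaves in a single image is the cepstrum, whose strongest negative peak along the motion direction sits at the blur length; the speed then follows from the pixel scale and exposure time. The sketch below only illustrates this idea; the function name, the use of a single image row, and the calibration parameters are assumptions, not the thesis's method.

```python
# Sketch: blur length from the 1-D cepstrum of an image row taken along the
# motion direction, converted to speed with assumed calibration values.
import numpy as np

def speed_from_blur(row, metres_per_pixel, exposure_s):
    spectrum = np.abs(np.fft.fft(row - row.mean()))
    cepstrum = np.fft.ifft(np.log(spectrum + 1e-9)).real
    # First pronounced negative peak, skipping the trivial region near the origin
    blur_length_px = int(np.argmin(cepstrum[2:len(row) // 2])) + 2
    return blur_length_px * metres_per_pixel / exposure_s   # speed in m/s
```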
117

Hierarchical motion-based video analysis with applications to video post-production

Pérez Rúa, Juan Manuel 04 December 2017
This manuscript contains the findings and conclusions of the research carried out on dynamic visual scene analysis. To be precise, we consider the ubiquitous monocular-camera computer vision set-up and the natural, unconstrained videos it can produce. In particular, we focus on problems that are of general interest to the computer vision literature and of special interest to the film industry, in the context of the video post-production pipeline. The tackled problems can be grouped into two main categories, according to whether or not they are driven by user interaction: user-assisted video processing tools and unsupervised tools for video analysis. This division is rather schematic, but it reflects the ways the proposed methods are used inside the video post-production pipeline.
These groups correspond to the two main parts of this manuscript, which are in turn divided into chapters presenting our proposed methods. A single thread, however, ties together all of our findings: a hierarchical analysis of motion composition in dynamic scenes. We explain our contributions, together with our main motivations and results, in the following sections. We start from the hypothesis that the ability to consider a hierarchical structure of scene motion is linked to a deeper level of dynamic scene understanding. This hypothesis is inspired by a large body of scientific research in biological and psychological vision. More specifically, we refer to the biological vision research that established the presence of motion-related sensory units in the visual cortex. The discovery of these specialized brain units motivated psychological vision researchers to investigate how animal locomotion (obstacle avoidance, path planning, self-localization) and other higher-level tasks are directly influenced by motion-related percepts. Interestingly, the perceptual responses that take place in the visual cortex are activated not only by motion itself, but also by occlusions, disocclusions, motion composition, and moving edges. Furthermore, psychological vision research has linked the brain's ability to grasp the compositional nature of motion in visual information to high-level scene understanding such as object segmentation and recognition.
118

Structure from Motion Using Optical Flow Probability Distributions

Merrell, Paul Clark 18 March 2005
Several novel structure from motion algorithms are presented that are designed to more effectively manage the problem of noise. In many practical applications, structure from motion algorithms fail to work properly because of the noise in the optical flow values. Most structure from motion algorithms implicitly assume that the noise is identically distributed and that the noise is white; both assumptions are false. Some points can be tracked more easily than others, and some points can be tracked more easily in a particular direction. The accuracy of each optical flow value can be quantified using an optical flow probability distribution. By using optical flow probability distributions in place of optical flow estimates in a structure from motion algorithm, a better understanding of the noise is developed and a more accurate solution is obtained. Two different methods of calculating the optical flow probability distributions are presented: the first calculates non-Gaussian probability distributions and the second calculates Gaussian probability distributions. Three different methods for calculating structure from motion that use these probability distributions are presented. The first works on two frames and can handle any kind of noise; the second works on two frames and is restricted to Gaussian noise; the final method works on multiple frames and assumes Gaussian noise. A simulation was created to directly compare the performance of methods that use optical flow probability distributions and methods that do not. The simulation results show that the methods which use the probability distributions better estimate the camera motion and the structure of the scene.
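For the Gaussian case, the idea can be sketched as a weighted least-squares estimate in which each flow vector contributes according to its inverse covariance; the linear model below is a placeholder for whatever camera model the algorithms actually use, so the names and shapes are assumptions.

```python
# Sketch: motion parameters from flow measurements weighted by per-point
# 2x2 flow covariances (Gaussian case).
import numpy as np

def weighted_motion_estimate(jacobians, flows, covariances):
    # jacobians: list of 2xK matrices mapping motion parameters to flow at a point
    # flows: list of measured 2-vectors; covariances: list of 2x2 matrices
    K = jacobians[0].shape[1]
    A = np.zeros((K, K))
    b = np.zeros(K)
    for J, f, C in zip(jacobians, flows, covariances):
        W = np.linalg.inv(C)          # information (inverse-covariance) weight
        A += J.T @ W @ J
        b += J.T @ W @ f
    return np.linalg.solve(A, b)      # ML motion estimate under Gaussian noise
```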
119

3D camera with built-in velocity measurement

Josefsson, Mattias January 2011
In today's industry, 3D cameras are often used to inspect products. The camera produces both a 3D model and an intensity image by capturing a series of profiles of the object using laser triangulation. In many of these setups a physical encoder is attached to, for example, the conveyor belt on which the product travels. The encoder provides an accurate reading of the speed of the product as it passes through the laser; without it, the output image from the camera can be distorted by variations in velocity. In this master's thesis a method for integrating the functionality of this physical encoder into the software of the camera is proposed. A pattern is placed alongside the object to be scanned; the pattern is located in the captured image and, with its help, the velocity can be determined and the object restored to its original proportions.
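A hedged sketch of the correction step only: if the pattern yields an estimated along-track position for each captured profile, the scan can be resampled onto a uniform grid so the object regains its true proportions. How the positions are extracted from the pattern is the thesis's own contribution and is not shown; the names and step size are assumptions.

```python
# Sketch: resample a profile scan onto uniform spacing given per-profile positions.
import numpy as np

def resample_scan(profiles, positions_mm, step_mm=0.5):
    # profiles: (num_profiles, width) array of range or intensity rows
    # positions_mm: estimated position of each profile along the transport axis
    uniform = np.arange(positions_mm[0], positions_mm[-1], step_mm)
    corrected = np.empty((len(uniform), profiles.shape[1]))
    for col in range(profiles.shape[1]):
        corrected[:, col] = np.interp(uniform, positions_mm, profiles[:, col])
    return corrected
```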
120

Algorithm-Based Efficient Approaches for Motion Estimation Systems

Lee, Teahyung 14 November 2007
This research addresses algorithms for efficient motion estimation systems. With the growth of the wireless video system market, including mobile imaging, digital still and video cameras, and video sensor networks, low power consumption is increasingly desirable for embedded video systems. Motion estimation typically requires considerable computation and is a basic building block for many video applications. To implement low-power video systems using embedded devices and sensors, a CMOS imager has been developed that allows low-power computations on the focal plane. In this dissertation, efficient motion estimation algorithms are presented to complement this platform. In the first part of the dissertation we propose two algorithms for gradient-based optical flow estimation (OFE) that reduce computational complexity while maintaining high performance. The first is a checkerboard-type filtering (CBTF) algorithm for prefiltering and spatiotemporal derivative calculations. The second is a spatially recursive OFE framework using recursive least squares (RLS) and/or matrix refinement to reduce the computational complexity of solving the linear system of image-intensity derivatives in least-squares (LS) OFE. Simulation results show that CBTF and spatially recursive OFE improve computational efficiency compared to conventional approaches, with higher or similar performance. In the second part of the dissertation we propose a new algorithm for video coding that improves motion estimation and compensation performance in the wavelet domain. This algorithm performs wavelet-based multi-resolution motion estimation (MRME) using temporal aliasing detection (TAD) to enhance rate-distortion (RD) performance under temporal aliasing noise. The technique gives competitive or better RD performance compared to conventional MRME and MRME with motion-vector prediction through median filtering.
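The baseline that the proposed CBTF and recursive schemes accelerate is the classic gradient-based, least-squares flow solution over a local window; a minimal sketch follows (window handling and the conditioning threshold are illustrative assumptions):

```python
# Sketch: least-squares optical flow from spatiotemporal derivatives in a window.
import numpy as np

def ls_flow(Ix, Iy, It):
    # Ix, Iy, It: spatial and temporal derivatives over one window (same shape)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 design matrix
    b = -It.ravel()                                   # brightness-constancy RHS
    AtA = A.T @ A
    if np.linalg.cond(AtA) > 1e6:                     # ill-conditioned (aperture problem)
        return np.zeros(2)
    return np.linalg.solve(AtA, A.T @ b)              # (u, v) flow estimate
```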
