1.
FORCE VELOCITY CONTROL WITH NEURAL NETWORK COMPENSATION FOR CONTOUR TRACKING WITH PNEUMATIC ACTUATION. Abu Mallouh, Mohammed. 17 September 2008.
Control of the contact force between a robot manipulator and a workpiece is critical for successful execution of tasks where the robot’s end effector must perform a contact operation along the contour of a workpiece. Representative tasks include polishing, grinding and deburring. Considerable research has been conducted on force control with electric robots. By contrast, little research has been conducted on force control with pneumatic robots. The latter have the potential to be considerably cheaper. However, the compressible nature of air as the working fluid and relatively high friction mean that pneumatic robots are more difficult to control. The subject of this thesis is the design and testing of a controller that regulates the normal contact force and tangential velocity of the end effector of a pneumatic gantry robot while tracking the contour of a planar workpiece. Both experimental and simulation results are presented.
A PI Force Velocity (FV) controller for contour tracking was designed and tested experimentally. Three different workpiece edge geometries were studied: straight, inclined and curved. The tracking performance with the PI FV controller was comparable to the performance reported by other researchers with a similar controller implemented with an electric robot. This result confirms the potential of pneumatically actuated robots in force control applications.
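As an illustration of the decoupled PI structure described above, the following sketch implements two independent PI loops, one regulating normal contact force and one regulating tangential velocity. The gains, setpoints, and sample time are hypothetical placeholders for illustration, not values from the thesis.

```python
# Sketch of a decoupled PI force/velocity control law. All numeric values
# below are illustrative assumptions, not the thesis's tuned parameters.

class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# One controller regulates normal contact force (z-axis valve command),
# the other regulates tangential velocity (x-axis valve command).
force_pi = PIController(kp=0.8, ki=2.0, dt=0.001)
velocity_pi = PIController(kp=1.5, ki=0.5, dt=0.001)

u_force = force_pi.update(setpoint=10.0, measurement=8.5)         # force in N
u_velocity = velocity_pi.update(setpoint=0.05, measurement=0.04)  # speed in m/s
```

In a force-velocity scheme like the one described, the two commands would drive the valves of separate cylinders, so each loop can be tuned independently.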
A system model was developed and validated in order to investigate the parameters that affect performance. A good match between experiment and simulation was achieved when the friction of the z-axis cylinder was modeled with a Displacement Dependent Friction Model (DDFM) instead of a Velocity Dependent Friction Model (VDFM). Subsequently, a DDFM based friction compensator was designed and tested. However, it was found that performance could not be improved even with perfect friction compensation, due to the effects of system lag.
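To make the distinction between the two friction-model families concrete, here is a minimal sketch of a velocity-dependent (Coulomb plus viscous) model alongside a displacement-dependent one in which friction decays with travel since motion onset. The functional forms and coefficients are illustrative assumptions; the thesis's DDFM is not reproduced here.

```python
import math

# Illustrative friction models; the specific forms and coefficients are
# assumptions for demonstration, not the models identified in the thesis.

def velocity_dependent_friction(v, f_coulomb=5.0, b_viscous=50.0):
    # Classic VDFM flavor: friction is a function of velocity only.
    return math.copysign(f_coulomb, v) + b_viscous * v

def displacement_dependent_friction(x, x_start, f_break=8.0, decay=200.0, f_ss=5.0):
    # DDFM flavor: friction decays with displacement since motion onset,
    # from a breakaway level toward a steady-state level.
    d = abs(x - x_start)
    return f_ss + (f_break - f_ss) * math.exp(-decay * d)
```

The practical difference is visible near motion reversals: the velocity-dependent model reacts only to the sign and magnitude of velocity, while the displacement-dependent model remembers how far the piston has traveled since it last started moving.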
Two Neural Network (NN) compensators were designed to compensate for both the lag and friction in the system. Simulation results for straight and curved edges were used to examine the effectiveness of the NN compensators. The performance of the PI FV controller was found to improve significantly when an NN compensator was added. This result confirms the value of NNs in control compensation for tracking applications with pneumatic actuation. Thesis (Ph.D., Mechanical and Materials Engineering), Queen's University, 16 September 2008.
2.
Multiple Object Tracking with Occlusion Handling. Safri, Murtaza. 16 February 2010.
Object tracking is an important problem with wide-ranging applications. The purpose is to detect object contours and track their motion in a video. The central concerns are to map objects correctly between two frames and to track through occlusion. This thesis presents a novel framework for object tracking inspired by image registration and segmentation models. Occlusion of objects is also detected and handled appropriately within this framework.
The main idea of our tracking framework is to reconstruct the sequence of images in the video. The process involves deforming all the objects in a given image frame, called the initial frame. Regularization terms govern the deformation of the shape of the objects; we use an elastic and viscous fluid model as the regularizer. The reconstructed frame is formed by combining the deformed objects with respect to the depth ordering. The correct reconstruction is selected by the parameters that minimize the difference between the reconstruction and the consecutive frame, called the target frame. These parameters provide the required tracking information, such as the contours of the objects in the target frame, including the occluded regions. The regularization term restricts the deformation of the object shape in the occluded region and thus gives an estimate of the object shape there. A second idea is to use a segmentation model as a measure in place of the frame-difference measure. This differs from an image segmentation procedure, since we use the segmentation model within a tracking framework to capture object deformation. Numerical examples demonstrate tracking in simple and complex scenes, along with the occlusion handling capability of our model. The segmentation measure is shown to be more robust with regard to accumulation of tracking error.
4.
Human extremity detection and its applications in action detection and recognition. Yu, Qingfeng. 02 June 2010.
It has been shown that the locations of internal body joints are sufficient visual cues to characterize human motion. In this dissertation I propose that the locations of human extremities, including the head, hands and feet, provide a powerful approximation to internal body motion. I propose detection of precise extremities from contours obtained from image segmentation or contour tracking. Junctions of the medial axis of a contour are selected as stars. Contour points with a locally maximal distance to the various stars are chosen as candidate extremities. All the candidates are filtered by cues including proximity to other candidates, visibility to stars, and robustness to noise smoothing parameters.

I present applications of precise extremities to fast human action detection and recognition. Environment-specific features are built from precise extremities and fed into a block-based Hidden Markov Model to decode the fence-climbing action from continuous videos. Precise extremities are grouped into stable contacts if the same extremity does not move for a certain duration. Such stable contacts are used to decompose a long continuous video into shorter pieces, and each piece is associated with motion features to form primitive motion units. In this way the sequence is abstracted into more meaningful segments, and a search strategy is used to detect the fence-climbing action. Moreover, I propose the histogram of extremities as a general posture descriptor, tested in a Hidden Markov Model based framework for action recognition.

I further propose detection of probable extremities from raw images without any segmentation. Modeling an extremity as an image patch instead of a single point on the contour overcomes the segmentation difficulty and increases detection robustness. I represent the extremity patches with Histograms of Oriented Gradients, and detection is achieved by window-based image scanning.
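The candidate-extremity step described above, selecting contour points whose distance to a star is a local maximum, can be sketched as follows; star selection from medial-axis junctions and the subsequent filtering cues are omitted from this illustration.

```python
import math

# Sketch of candidate-extremity selection: keep contour points whose
# distance to a reference "star" point is a local maximum along the contour.

def candidate_extremities(contour, star):
    d = [math.dist(p, star) for p in contour]
    n = len(contour)
    # The contour is closed, so neighbors wrap around.
    return [contour[i] for i in range(n)
            if d[i] > d[(i - 1) % n] and d[i] > d[(i + 1) % n]]

# A diamond-shaped contour: with the centroid as the star, the four
# vertices are the local distance maxima.
contour = [(2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1), (0, -2), (1, -1)]
print(candidate_extremities(contour, (0, 0)))  # -> [(2, 0), (0, 2), (-2, 0), (0, -2)]
```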
To reduce the computation load, I adopt the integral histogram technique without sacrificing accuracy. The result is a probability map in which each pixel denotes the probability that the surrounding patch forms the specific class of extremity. With a probable extremity map, I propose the histogram of probable extremities as another general posture descriptor. It is tested on several data sets, and the results are compared with those of precise extremities to show the superiority of probable extremities.
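The integral-histogram speed-up mentioned above works by precomputing cumulative per-bin counts so that the histogram of any rectangular window follows from four lookups per bin, instead of rescanning the window's pixels. A minimal sketch, using a toy label image in place of oriented-gradient bins:

```python
# Sketch of the integral-histogram technique for fast window scanning.

def integral_histogram(image, n_bins):
    h, w = len(image), len(image[0])
    # ih[y][x][b] = count of bin b in the rectangle of rows [0, y), cols [0, x)
    ih = [[[0] * n_bins for _ in range(w + 1)] for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            for b in range(n_bins):
                ih[y + 1][x + 1][b] = (image[y][x] == b) + ih[y][x + 1][b] \
                    + ih[y + 1][x][b] - ih[y][x][b]
    return ih

def region_histogram(ih, top, left, bottom, right):
    # Histogram of rows [top, bottom) and columns [left, right):
    # four lookups per bin, regardless of region size.
    return [ih[bottom][right][b] - ih[top][right][b]
            - ih[bottom][left][b] + ih[top][left][b]
            for b in range(len(ih[0][0]))]

image = [[0, 1, 1],
         [2, 1, 0],
         [0, 0, 2]]
print(region_histogram(integral_histogram(image, 3), 0, 0, 2, 2))  # -> [1, 2, 1]
```

In an HOG setting each bin would hold a gradient-orientation vote rather than a label count, but the cumulative-table trick is the same.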
5.
Hierarchical motion-based video analysis with applications to video post-production. Pérez Rúa, Juan Manuel. 04 December 2017.
This manuscript presents the findings and conclusions of our research in dynamic visual scene analysis. To be precise, we consider the ubiquitous monocular-camera computer vision set-up and the natural, unconstrained videos it can produce. In particular, we focus on problems that are of general interest for the computer vision literature, and of special interest to the film industry in the context of the video post-production pipeline. The tackled problems can be grouped into two main categories, according to whether or not they are driven by user interaction: user-assisted video processing tools and unsupervised tools for video analysis. This division is somewhat schematic, but it reflects the ways the proposed methods are used inside the video post-production pipeline. These two groups correspond to the main parts of this manuscript, which are in turn divided into chapters presenting our proposed methods. A single thread, however, ties together all of our findings: a hierarchical analysis of motion composition in dynamic scenes.
We explain our exact contributions, together with our main motivations, and results in the following sections. We depart from a hypothesis that links the ability to consider a hierarchical structure of scene motion, with a deeper level of dynamic scene understanding. This hypothesis is inspired by plethora of scientific research in biological and psychological vision. More specifically, we refer to the biological vision research that established the presence of motion-related sensory units in the visual cortex. The discovery of these specialized brain units motivated psychological vision researchers to investigate how animal locomotion (obstacle avoidance, path planning, self-localization) and other higher-level tasks are directly influenced by motion-related percepts. Interestingly, the perceptual responses that take place in the visual cortex are activated not only by motion itself, but by occlusions, dis-occlusions, motion composition, and moving edges. Furthermore, psychological vision have linked the brain's ability to understand motion composition from visual information to high level scene understanding like object segmentation and recognition.
6.
Efficient numerical method for solution of L² optimal mass transport problem. Rehman, Tauseef ur. 11 January 2010.
In this thesis, a novel and efficient numerical method is presented for the computation of the L² optimal mass transport mapping in two and three dimensions. The method uses a direct variational approach. A new projection-to-the-constraint technique has been formulated that can yield a good starting point for the method as well as a second-order accurate discretization of the problem. The numerical experiments demonstrate that the algorithm yields accurate results in a relatively small, mesh-independent number of iterations.

In the first part of the thesis, the theory and implementation details of the proposed method are presented. These include the reformulation of the Monge-Kantorovich problem using a variational approach, and then the use of a consistent discretization in conjunction with the "discretize-then-optimize" approach to solve the resulting discrete system of differential equations. Advanced numerical methods such as multigrid and adaptive mesh refinement have been employed to solve the linear systems in practical time, even for 3D applications. In the second part, the method's efficacy is shown via application to various image processing tasks, including image registration and morphing. The application of optimal mass transport (OMT) to registration is presented in the context of medical imaging, in particular image-guided therapy, where registration is used to align multiple data sets with each other and with the patient. It is shown that an elastic warping methodology based on the notion of mass transport is quite natural for several medical imaging applications where density can be a key measure of similarity between different data sets, e.g., proton-density-based imagery provided by MR. An application of the two-dimensional optimal mass transport algorithm is also presented that computes diffeomorphic correspondence maps between curves for geometric interpolation in an active contour based visual tracking application.
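While the two- and three-dimensional L² problem requires the variational machinery described above, the one-dimensional case has a closed-form solution given by monotone rearrangement of the cumulative distributions, T = G⁻¹ ∘ F. The following sketch illustrates that special case on discrete densities; it is an illustration of the transport concept, not the thesis's 2D/3D solver.

```python
# Sketch of the 1-D optimal transport map on discrete densities:
# match cumulative distributions (monotone rearrangement).

def cdf(density):
    total, acc, out = sum(density), 0.0, []
    for m in density:
        acc += m / total
        out.append(acc)
    return out

def transport_map_1d(mu, nu):
    # For each source cell, find the first target cell whose CDF reaches
    # the source CDF value.
    F, G = cdf(mu), cdf(nu)
    eps = 1e-12  # tolerance for floating-point CDF comparisons
    return [next(j for j, g in enumerate(G) if g >= f - eps) for f in F]

# All mass on the left must move two cells to the right.
print(transport_map_1d([1, 1, 0, 0], [0, 0, 1, 1]))  # -> [2, 3, 3, 3]
```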
7.
Dynamic curve estimation for visual tracking. Ndiour, Ibrahima Jacques. 03 August 2010.
This thesis tackles the visual tracking problem as a target contour estimation problem in the face of corrupted measurements. The major aim is to design robust recursive curve filters for accurate contour-based tracking. The state-space representation adopted comprises a group component and a shape component, describing the rigid motion and the non-rigid shape deformation respectively; filtering strategies for each component are then decoupled. The thesis considers two implicit curve descriptors, a classification probability field and the traditional signed distance function, and aims to develop an optimal probabilistic contour observer and locally optimal curve filters. For the former, introducing a novel probabilistic shape description simplifies the filtering problem on the infinite-dimensional space of closed curves to a series of point-wise filtering tasks. The definition and justification of a novel update model suited to the shape space, the derivation of the filtering equations, and the relation to Kalman filtering are studied. In addition to the temporal consistency provided by the filtering, extensions involving distributed filtering methods are considered in order to maintain spatial consistency. For the latter, locally optimal closed-curve filtering strategies involving curve velocities are explored. The introduction of a local, linear description for planar curve variation and curve uncertainty enables the derivation of a mechanism for estimating the optimal gain associated with the curve filtering process, given quantitative uncertainty levels. Experiments on synthetic and real image sequences validate the filtering designs.
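The reduction from curve filtering to point-wise filtering can be illustrated with a scalar Kalman-style update applied independently at each pixel of a classification probability field. The static-shape prediction model and the noise variances below are illustrative assumptions, not the update model developed in the thesis.

```python
# Sketch of point-wise filtering of a probabilistic shape description:
# one independent scalar Kalman-style filter per pixel.

def kalman_step(estimate, variance, measurement, process_var=0.01, meas_var=0.25):
    # Predict: assume the shape is static up to process noise.
    variance += process_var
    # Update: blend prediction and measurement by their uncertainties.
    gain = variance / (variance + meas_var)
    return estimate + gain * (measurement - estimate), (1 - gain) * variance

def filter_field(estimates, variances, measurements):
    # Apply the scalar filter independently at every pixel of the field.
    pairs = [kalman_step(e, v, z)
             for e, v, z in zip(estimates, variances, measurements)]
    return [p[0] for p in pairs], [p[1] for p in pairs]

# Fuse a noisy observation of a 1x4 probability field with the estimate.
est, var = filter_field([0.9, 0.8, 0.1, 0.0], [0.1] * 4, [1.0, 0.6, 0.3, 0.1])
```

Each update pulls the estimate toward the measurement in proportion to the relative uncertainties and shrinks the per-pixel variance, which is the temporal-consistency behavior the abstract attributes to the filtering.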
8.
Visual Tracking of Deformation and Classification of Object Elasticity with Robotic Hand Probing. Hui, Fei. January 2017.
Performing tasks with a robotic hand often requires complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The fundamental objectives of this research are to track the deformation of non-rigid objects under robotic hand manipulation using RGB-D data, to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The goal is not to formally model the material of the object, but rather to employ a data-driven approach that makes decisions based on the observed properties of the object, implicitly captures its deformation behavior, and supports adaptive control of a robotic hand in future research. The proposed approach advantageously combines color image and point cloud processing techniques, and introduces a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the detected contour as the object deforms. The results demonstrate that a recognition rate over all categories of material of up to 98.3% is achieved based on the detected contour.
When integrated into the control loop of a robotic hand, the method can contribute to a stable grasp and safe manipulation that preserves the physical integrity of the object.
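Dynamic time warping, used above to compare contour-based features of varying length, can be sketched in its minimal one-dimensional form; the actual feature vectors used in the thesis are not reproduced here.

```python
# Minimal dynamic time warping (DTW) between two 1-D sequences: the classic
# O(n*m) dynamic program over insertions, deletions, and matches.

def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# The same profile sampled at different lengths still matches exactly,
# which is why DTW suits contours of varying length.
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0]))  # -> 0.0
```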
9.
A Real-Time and Automatic Ultrasound-Enhanced Multimodal Second Language Training System: A Deep Learning Approach. Mozaffari Maaref, Mohammad Hamed. 08 May 2020.
Pronunciation plays a critical role in communicative competence, especially for second language learners. Despite renewed awareness of the importance of articulation, handling the pronunciation needs of language learners remains a challenge for instructors. Pedagogical tools for pronunciation teaching and learning are relatively scarce, and traditional instruction, such as listening and repeating, is inefficient. Recently, electronic visual feedback (EVF) systems (e.g., medical ultrasound imaging) have been exploited in new approaches that can be effectively incorporated into a range of teaching and learning contexts. Evaluations of ultrasound-enhanced methods for pronunciation training, such as multimodal methods, have indicated that visualizing the articulatory system as biofeedback to language learners may improve the efficiency of articulation learning. Despite the recent successful use of multimodal techniques for pronunciation training, manual work and human manipulation remain inevitable at many stages of those systems. Furthermore, recognizing tongue shape in noisy, low-contrast ultrasound images is challenging, especially for non-expert users in real-time applications. On the other hand, our user study revealed that users could not comfortably perceive the placement of their tongue inside the mouth just by watching pre-recorded videos.
Machine learning is a subset of artificial intelligence (AI) in which machines learn from experience and acquire skills without human involvement. Inspired by the functionality of the human brain, deep artificial neural networks learn from large amounts of data to perform a task repeatedly. Deep learning-based methods have emerged as the dominant paradigm in many computer vision tasks in recent years. Deep learning methods are powerful in automatically learning a new task and, unlike traditional image processing methods, can deal with challenges such as object occlusion, transformation variance, and background artifacts. In this dissertation, we implemented a guided language pronunciation training system that benefits from the strengths of deep learning techniques. Our modular system provides a fully automatic, real-time language pronunciation training tool using ultrasound-enhanced augmented reality. Qualitative and quantitative assessments indicate exceptional performance for our system in terms of flexibility, generalization, robustness, and autonomy, outperforming previous techniques. Using our ultrasound-enhanced system, a language learner can observe her/his tongue movements during real-time speech, automatically superimposed on her/his face.
|