51.
Recovering Scale in Relative Pose and Target Model Estimation Using Monocular Vision. Tribou, Michael. January 2009.
A combined relative pose and target object model estimation framework using a monocular camera as the primary feedback sensor has been designed and validated in a simulated robotic environment. The monocular camera is mounted on the end-effector of a robot manipulator and measures the image plane coordinates of a set of point features on a target workpiece object. Using this information, the relative position and orientation, as well as the geometry, of the target object are recovered recursively by a Kalman filter process. The Kalman filter facilitates the fusion of supplemental measurements from range sensors with those gathered by the camera. This allows the estimated system state to remain accurate and to recover the true environment scale.
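To make the fusion step concrete, the sketch below shows a generic extended Kalman filter measurement update applied to a single feature point, first with a monocular pixel measurement and then with a range measurement that supplies the missing scale. It illustrates the general technique only, not the thesis's actual filter: the state, focal length, and noise values are invented for the example.

```python
import numpy as np

def h_pixel(x, f=800.0):
    """Pinhole projection of point x = [X, Y, Z]; f is an assumed focal length in pixels."""
    X, Y, Z = x
    return np.array([f * X / Z, f * Y / Z])

def h_range(x):
    """Range sensor model: distance from the camera origin to the point."""
    return np.array([np.linalg.norm(x)])

def jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of the measurement function h at state x."""
    y0 = h(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (h(x + dx) - y0) / eps
    return J

def ekf_update(x, P, z, h, R):
    """Standard EKF measurement update for one sensor reading z with noise covariance R."""
    H = jacobian(h, x)
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - h(x))                    # state correction
    P = (np.eye(x.size) - K @ H) @ P          # covariance correction
    return x, P

# Prior estimate: correct bearing, but the depth (scale) is off by a factor of two.
x = np.array([0.10, 0.05, 2.0])               # estimated feature position (m)
P = np.diag([0.01, 0.01, 1.0])                # large uncertainty in depth

truth = np.array([0.05, 0.025, 1.0])
z_px = h_pixel(truth)                         # camera measurement (bearing only)
z_rg = h_range(truth)                         # range measurement (carries scale)

x, P = ekf_update(x, P, z_px, h_pixel, np.eye(2) * 1.0)   # no effect: bearing already matches
x, P = ekf_update(x, P, z_rg, h_range, np.eye(1) * 1e-4)  # collapses the depth error
x, P = ekf_update(x, P, z_px, h_pixel, np.eye(2) * 1.0)   # re-aligns X, Y at the new depth
print(x)  # close to [0.05, 0.025, 1.0], the metric ground truth
```

Note how the pixel measurement alone cannot correct the depth (the prior lies exactly along the true bearing), while the range update collapses the scale error.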
Current approaches in the research areas of visual servoing control and mobile robotics are studied for the case where the target object feature point geometry is well known prior to the start of estimation. In this case, only the relative pose between the camera and target object frames is estimated over a sequence of images from a single monocular camera. An observability analysis was carried out to identify the physical configurations of camera and target object for which the relative pose cannot be recovered by measuring only the camera image plane coordinates of the object point features.
A popular extension is to estimate the target object model concurrently with the relative pose of the camera frame, a process known as Simultaneous Localization and Mapping (SLAM). The recursive framework was augmented to accommodate this larger estimation problem. The scale of the recovered solution is ambiguous when only measurements from a single camera are used. A second observability analysis highlights further configurations for which the relative pose and target object model are unrecoverable from camera measurements alone. Instead, measurements that contain the global scale are required to obtain an accurate solution.
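The scale ambiguity follows directly from the pinhole projection equations. In this minimal sketch (with an assumed focal length and geometry), scaling the scene and the camera translation by any common factor leaves every image measurement unchanged:

```python
import numpy as np

f = 800.0                                     # assumed focal length (pixels)

def project(X, t):
    """Pixel coordinates of world point X seen by a pinhole camera at position t."""
    Xc = X - t
    return f * Xc[:2] / Xc[2]

X = np.array([0.3, -0.1, 2.0])                # a feature point (m)
t = np.array([0.05, 0.0, 0.0])                # camera translation (m)
for s in (1.0, 2.0, 10.0):
    print(s, project(s * X, s * t))           # identical pixels for every scale s
```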
A set of additional sensors is detailed, including range finders and additional cameras. Measurement models are given for each, facilitating the fusion of this supplemental data with the original monocular camera image measurements. A complete framework is then derived to combine such sensor measurements and recover an accurate relative pose and target object model estimate.
The proposed framework is tested in a simulation environment with a virtual robot manipulator tracking a target workpiece object through a relative trajectory. All of the detailed estimation schemes are executed: the single monocular camera cases in which the target object geometry is known and unknown, respectively; a two-camera system in which the measurements are fused within the Kalman filter to recover the scale of the environment; a camera and point range sensor combination that provides a single range measurement at each system time step; and a laser pointer and camera hybrid that concurrently measures the feature point images and a single range metric. The performance of the individual test cases is compared to determine which set of sensors is able to provide robust and reliable estimates for use in real-world robotic applications.
Finally, conclusions on the performance of the estimators are drawn and directions for future work are suggested. The camera and range finder combination is shown to accurately recover the proper scale for the estimate and warrants further investigation. Furthermore, early results from the multiple monocular camera setup show performance superior to the other sensor combinations, and interesting possibilities exist for wide field-of-view super-sensors with high frame rates, built from many inexpensive devices.
52.
An Active Camera Calibration Method with XYZ 3D Table. Tseng, Ching-I. 12 July 2000.
Machine vision technology is broadly applied in many areas, such as industrial inspection, medical image processing, remote sensing, and nanotechnology. It recovers useful information about a scene from its two-dimensional projections, a recovery that requires the inversion of a many-to-one mapping. Important information is often lost because the mapping is not exactly correct, owing to lens distortion, rotation and perspective distortion, and non-ideal vision systems. Camera calibration can compensate for these ill conditions. In this thesis, I present an active calibration technique, derived from Song's research (1996), for calibrating the camera's intrinsic parameters. It requires no reference object and directly uses images of the environment; the camera only has to perform a series of translational motions driven by the XYZ 3-D table.
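As an illustration of the idea of calibrating from controlled translations (not a reproduction of Song's method), the sketch below shows how a known lateral move makes the ratio f/Z observable for a tracked point, and a known axial move then separates the focal length f from the unknown depth Z. All values are assumptions for the example.

```python
import numpy as np

f_true, X, Z = 820.0, 0.20, 3.0               # assumed ground truth: focal length (px), point (m)

def u(X, Z, f):
    """Horizontal pixel coordinate of a point at lateral offset X and depth Z."""
    return f * X / Z

u0 = u(X, Z, f_true)                          # before any motion
dX = 0.05                                     # known lateral translation of the table (m)
u1 = u(X - dX, Z, f_true)                     # after the lateral move
dZ = 0.50                                     # known axial translation toward the scene (m)
u2 = u(X, Z - dZ, f_true)                     # after the axial move

k = (u0 - u1) / dX                            # lateral move observes k = f / Z
Z_est = u2 * dZ / (u2 - u0)                   # axial move separates Z: u2 = u0 * Z / (Z - dZ)
f_est = k * Z_est
print(f_est, Z_est)                           # recovers f = 820.0 px and Z = 3.0 m
```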
53.
Pedestrian Detection on FPGA. Qureshi, Kamran. January 2014.
Image processing has its roots in the curiosity of human vision. Translating what we see in everyday life, and how we differentiate between objects, into robotic vision is a challenging and modern research topic. This thesis focuses on detecting a pedestrian within a standard-format image, and the efficiency of the algorithm is observed after its implementation on an FPGA. The algorithm for pedestrian detection was developed using MATLAB as a base. To detect a pedestrian, a histogram of oriented gradients (HOG) of an image was computed; studies indicate that the HOG is distinctive for different objects within an image. The HOG of a series of images was computed to train a binary classifier, and a new image was then fed to the classifier to test its efficiency. Within the time frame of the thesis, the algorithm was partially translated to a hardware description using VHDL as the base descriptor. The proficiency of the hardware implementation was noted and the results exported to MATLAB for further processing. A hybrid model was created in which the pre-processing steps were computed in the FPGA and classification was performed in MATLAB. The outcome of the thesis shows that HOG is a very efficient and effective way to classify and differentiate objects within an image. Given its efficiency, the algorithm may even be extended to video.
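The sketch below illustrates the HOG-plus-binary-classifier pipeline described above, using scikit-image and scikit-learn in place of the thesis's MATLAB and VHDL implementations; the random arrays stand in for real pedestrian and background training windows.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
windows = rng.random((20, 128, 64))           # stand-ins for 128x64 grayscale detection windows
labels = np.array([1] * 10 + [0] * 10)        # 1 = pedestrian, 0 = background

# Compute the HOG descriptor of every training window.
features = np.array([
    hog(w, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for w in windows
])

clf = LinearSVC().fit(features, labels)       # train the binary classifier

# Classify a new, unseen window.
test = hog(rng.random((128, 64)), orientations=9,
           pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(clf.predict([test]))
```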
54.
Design and development of an intelligent neuro-fuzzy system for automated visual inspection. Killing, Jonathan. 18 July 2007.
This thesis presents work on the use of intelligent algorithms to solve a real-world machine vision problem in the automotive industry. Compared to commercial systems, the developed algorithm is both more robust to changes in the inspection environment and more intuitive for the user to configure. / Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2007-07-12
55.
Detection of insect and fungal damage and incidence of sprouting in stored wheat using near-infrared hyperspectral and digital color imaging. Singh, Chandra B. 14 September 2009.
Wheat grain quality is defined by several parameters, of which insect damage, fungal damage, and sprouting are considered important degrading factors. At present, Canadian wheat is inspected and graded manually by Canadian Grain Commission (CGC) inspectors at grain handling facilities or in CGC laboratories. Visual inspection methods are time consuming, inefficient, subjective, and require experienced personnel. Therefore, an alternative rapid, objective, accurate, and cost-effective technique is needed for real-time grain quality monitoring that can assist or replace the manual inspection process. Insect-damaged wheat samples infested by rice weevil (Sitophilus oryzae), lesser grain borer (Rhyzopertha dominica), rusty grain beetle (Cryptolestes ferrugineus), and red flour beetle (Tribolium castaneum); fungal-damaged wheat samples infected by the storage fungi Penicillium spp., Aspergillus glaucus, and Aspergillus niger; and artificially sprouted wheat kernels were obtained from the Cereal Research Centre (CRC), Agriculture and Agri-Food Canada, Winnipeg, Canada. Field-damaged sprouted (midge-damaged) wheat kernels were procured from five growing locations across western Canada. Healthy and damaged wheat kernels were imaged using long-wave near-infrared (LWNIR) and short-wave near-infrared (SWNIR) hyperspectral imaging systems and an area-scan color camera. The acquired images were stored for processing, feature extraction, and algorithm development. The LWNIR system correctly classified 85-100% of healthy and insect-damaged, 95-100% of healthy and fungal-infected, and 85-100% of healthy and sprouted/midge-damaged kernels. The SWNIR system correctly classified 92.7-100%, 96-100%, and 93.3-98.7% of insect-, fungal-, and midge-damaged kernels, respectively (up to 28% false-positive error). Color imaging correctly classified 93.7-99.3%, 98-100%, and 94-99.7% of insect-, fungal-, and midge-damaged kernels, respectively (up to 26% false-positive error). Combining the SWNIR features with the top color-image features correctly classified 91-100%, 99-100%, and 95-99.3% of insect-, fungal-, and midge-damaged kernels, respectively, with less than 4% false-positive error.
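As an illustration of the general classification step (the abstract does not name the classifier, so a linear discriminant is substituted here), the sketch below reduces each kernel to a feature vector and trains on synthetic healthy and damaged classes; the features and data are invented stand-ins, not CGC data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
healthy = rng.normal(0.60, 0.05, (100, 10))   # 10 synthetic spectral/color features per kernel
damaged = rng.normal(0.45, 0.05, (100, 10))
X = np.vstack([healthy, damaged])
y = np.array([0] * 100 + [1] * 100)           # 0 = healthy, 1 = damaged

clf = LinearDiscriminantAnalysis().fit(X, y)  # discriminant classifier on kernel features
print(f"{clf.score(X, y):.1%}")               # resubstitution accuracy on the synthetic data
```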
56.
MACHINE VISION RECOGNITION OF THREE-DIMENSIONAL SPECULAR SURFACE FOR GAS TUNGSTEN ARC WELD POOL. Song, Hongsheng. 01 January 2007.
Observing the weld pool surface and measuring its geometric parameters is key to developing the next generation of intelligent welding machines that can mimic a skilled human welder, who observes the weld pool to adjust welding parameters. It also provides an effective way to improve and validate welding process models. Although different techniques have been applied in the past few years, the dynamic specular weld pool surface and the strong welding arc complicate these approaches and make observation and measurement difficult. In this dissertation, a novel machine vision system to measure the three-dimensional gas tungsten arc weld pool surface is proposed, which takes advantage of the specular reflection. In the designed system, a structured laser pattern is projected onto the weld pool surface, and its reflection from the specular surface is imaged on an imaging plane and recorded by a high-speed camera fitted with a narrow band-pass filter. The deformation of the molten weld pool surface distorts the reflected pattern. To derive the deformed surface of the weld pool, an image processing algorithm is first developed to detect the reflection points in the reflected laser pattern. The reflection points are then matched with their respective incident rays according to the findings of correspondence simulations. As a result, a set of matched incident rays and reflection points is obtained, and an iterative surface reconstruction scheme is proposed to derive the three-dimensional pool surface from these data based on the law of reflection. The reconstruction results demonstrate the effectiveness of the system. Using the proposed surface measurement (machine vision) system, the fluctuation of weld pool surface parameters has been studied. In addition, the measurement error has been analyzed and its sources identified in order to improve the system's accuracy. The achievements in this dissertation provide useful guidance for further studies of on-line pool measurement and welding quality control.
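The geometric core of the reconstruction is the law of reflection. The sketch below shows only this forward relation, with an assumed incident ray and surface normal; the thesis's iterative scheme adjusts the surface, and hence the normals, until the predicted reflections match the imaged ones.

```python
import numpy as np

def reflect(d, n):
    """Reflected ray direction for incident direction d and normal n: r = d - 2 (d . n) n."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

d = np.array([0.0, -0.6, -0.8])               # assumed incident laser ray (toward the pool)
n = np.array([0.1, 0.0, 1.0])                 # assumed local normal of the deformed surface
print(reflect(d, n))                          # direction toward the imaging plane
```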
57.
High-Speed Probe Card Analysis Using Real-time Machine Vision and Image Restoration Technique. Shin, Bonghun. January 2013.
There has been an increase in demand for wafer-level test techniques that evaluate the functionality and performance of wafer chips before they are packaged, as integrated circuits become more sophisticated and smaller in size. Through wafer-level testing, semiconductor manufacturers can avoid unnecessary packaging costs and obtain early feedback on the overall status of the chip fabrication process. A probe card is a module of a wafer-level tester that detects chip defects by evaluating the electrical characteristics of the integrated circuits (ICs). A probe card analyzer is commonly used to detect potential probe card failures, which would otherwise lead to unnecessary manufacturing expense in the packaging process.

In this paper, a new probe card analysis strategy is proposed. The main idea is to perform the vision-based inspection on-the-fly while the camera is continuously moving. To do so, the position measurement from the encoder is first synchronized with the image data, which is captured by a controlled trigger signal under a real-time setting. Because capturing images from a moving camera blurs the images, a simple deblurring technique is employed to restore the original still images from the blurred ones. The main ideas are demonstrated using an experimental test bed and a commercial probe card. The test bed comprises a micro machine-vision system and a real-time controller, and a low-cost configuration is proposed. Compared to the existing stop-and-go approach, the proposed technique can substantially enhance the inspection speed without additional cost for major hardware changes.
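As a sketch of the deblurring idea (the abstract does not specify the exact technique, so a standard Wiener filter is substituted here), horizontal camera motion during exposure is modeled as a 1-D box point-spread function and inverted in the frequency domain; in practice the blur length would come from the encoder-synchronized position measurements. All values below are assumptions.

```python
import numpy as np

def motion_psf(length, width):
    """1-D horizontal box blur modeling camera motion during exposure."""
    psf = np.zeros(width)
    psf[:length] = 1.0 / length
    return psf

def wiener_deblur_rows(blurred, psf, k=1e-2):
    """Row-wise Wiener deconvolution; k regularizes near-zero frequencies."""
    H = np.fft.fft(psf, n=blurred.shape[1])
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(np.fft.fft(blurred, axis=1) * G, axis=1))

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))                  # stand-in for a still probe-tip image
psf = motion_psf(9, 64)                       # assumed 9-pixel blur length
blurred = np.real(np.fft.ifft(np.fft.fft(sharp, axis=1) * np.fft.fft(psf, n=64), axis=1))
restored = wiener_deblur_rows(blurred, psf)
print(np.abs(restored - sharp).mean())        # small residual restoration error
```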
59.
Vision-Based Localization Using Reliable Fiducial Markers. Stathakis, Alexandros. 05 January 2012.
Vision-based positioning systems are founded primarily on a simple image processing technique: identifying visually significant key-points in an image and relating them to a known coordinate system in the scene. Fiducial markers provide the scene with a number of specific key-points, or features, that computer vision algorithms can quickly identify within a captured image. This thesis proposes a reliable vision-based positioning system that utilizes a unique pseudo-random fiducial marker. The marker itself offers 49 distinct feature points for position estimation. The designed marker is detected through an integrated process of adaptive thresholding, k-means clustering, color classification, and data verification. The ultimate goal of such a system is indoor localization on low-cost autonomous mobile platforms.
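A minimal sketch of the front end of such a detection pipeline, using OpenCV: adaptive thresholding to isolate candidate marker pixels, followed by k-means clustering of their colors for the color-classification step. The synthetic image, parameter values, and cluster count are assumptions; the pseudo-random marker layout and the data-verification step are thesis-specific and not reproduced.

```python
import cv2
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)   # stand-in for a camera frame

# Adaptive thresholding isolates locally dark regions as candidate marker pixels.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 31, 10)

# k-means clustering groups the candidate pixels by color (assumed 4 marker colors).
pixels = img[mask > 0].reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
print(centers)                                # cluster centers feed the color-classification step
```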
60.
Automation applications for robotic processes (Εφαρμογές αυτοματοποίησης ρομποτικών διαδικασιών). Ματθαιάκης, Αλέξανδρος-Στέργιος. 13 October 2013.
The current trend in robotics demands that robots adapt to their environment more easily, mainly to save money and time. To adapt in the best possible way, a robot must be able to take information from its environment as input; various kinds of sensors are used for this purpose.

For this diploma thesis, a stereoscopic vision system was developed. The rationale for such a system is that it acts as the eyes of the robot: it automatically identifies the points to which the robot must move in order to complete each task (grasping an object, welding, etc.).

Experiments were built around this subject, collecting 3D measurements of various points with the camera so that the robot could be guided to the points recognized by the image processing algorithm. These experiments were supported by the design and programming of a stereoscopic vision system that guides the robot in completing welds at the respective points.
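A minimal sketch of the stereo triangulation at the heart of such a system: with a calibrated, rectified camera pair, the depth of a matched weld point follows from its disparity. The focal length, baseline, principal point, and pixel coordinates below are assumed values.

```python
f = 700.0                                     # assumed focal length (pixels)
B = 0.12                                      # assumed baseline between the cameras (m)
cx, cy = 320.0, 240.0                         # assumed principal point

uL, uR, v = 412.0, 377.0, 260.0               # matched pixel of a weld point (left u, right u, v)

disparity = uL - uR
Z = f * B / disparity                         # depth from disparity
X = (uL - cx) * Z / f                         # back-project to left-camera coordinates
Y = (v - cy) * Z / f
print(X, Y, Z)                                # 3-D weld point handed to robot motion planning
```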