41 |
An Automatic Image Recognition System for Winter Road Condition Monitoring / Omer, Raqib 17 February 2011 (has links)
Municipalities and contractors in Canada and other parts of the world rely on road surface condition information during and after a snow storm to optimize maintenance operations and planning. With the ever increasing demand for safer and more sustainable road networks comes a demand for more reliable, accurate, and up-to-date road surface condition information, gathered with limited available resources. Such high dependence on road condition information is drawing attention both to analyzing the reliability of current technology and to developing new and more innovative methods for monitoring road surface condition. This research provides an overview of the various road condition monitoring technologies in use today. A new machine vision based mobile road surface condition monitoring system is proposed which has the potential to provide high spatial and temporal coverage. The proposed approach uses multiple models calibrated to local pavement color and environmental conditions, potentially providing better accuracy than a single model for all conditions. Once fully developed, this system could provide intermediate data between the more reliable fixed monitoring stations, giving the authorities wider coverage without heavy extra cost. The up-to-date information could be used to better plan maintenance strategies and thus minimize salt use and maintenance costs.
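As a rough illustration of the multiple-model idea in this abstract, the following Python sketch selects a classifier calibrated for the nearest pavement-colour profile before classifying an image patch. The profiles, thresholds, and class labels are hypothetical placeholders, not the thesis's models.

```python
import numpy as np

# Hypothetical calibration profiles: mean RGB of bare pavement per site/lighting.
PROFILES = {
    "dark_asphalt_day":   np.array([70.0, 70.0, 75.0]),
    "light_concrete_day": np.array([150.0, 150.0, 155.0]),
}

def select_model(pavement_rgb):
    """Choose the model whose calibrated pavement colour is closest."""
    return min(PROFILES, key=lambda k: np.linalg.norm(PROFILES[k] - pavement_rgb))

def classify(patch, model_name):
    """Toy per-model rule: snow reads much brighter than the calibrated
    bare-pavement brightness for this site."""
    base = PROFILES[model_name].mean()
    return "snow covered" if patch.mean() > base + 60 else "bare"

patch = np.full((32, 32, 3), 180.0)            # a bright image patch
model = select_model(np.array([75, 72, 78]))   # pavement sample from the scene
print(model, "->", classify(patch, model))
```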
|
42 |
Pose estimation and relative orbit determination of a nearby target microsatellite using passive imagery / Cropp, Alexander January 2001 (has links)
A method of estimating the relative position and orientation of a known target satellite is presented, using only passive imagery. Such a method is intended as a prelude to a system required in future autonomous satellite docking missions. Using a single monocular image, and utilising knowledge of the target spacecraft, estimates of the target's six relative rotation and translation parameters with respect to the camera are found. Pose estimation is divided into modular sections. Each frame is processed to detect the major lines in the image, and correspondence information between detected lines and a priori target information is estimated, resulting in a list of line-to-model correspondences. This correspondence information is used to estimate the pose of the target required to produce such a correspondence list. Multiple possible pose estimates are generated and tested, where each estimate contains the three rotation and three translation parameters. The best estimates go through to the least-squares minimisation phase, which reduces estimation error and provides statistical information for multi-frame filtering. The final estimate vector and covariance matrix are the end result for each frame.

Estimates of the target location over time allow the relative orbit parameters of the target to be estimated. Location estimates are filtered to fit an orbit model based on Hill's equations, and statistical information gathered with each estimate is included in the filter process when estimating the orbit parameters. These orbit parameters allow prediction of the target location over time, which will enable mission planning and safety analysis of potential orbit manoeuvres in close proximity to the target.

Testing is carried out by a detailed simulation system, which renders accurate images of the target satellite given the true pose of the target with respect to the inertial reference frame. The rendering software used takes into account lighting conditions, reflections, shadowing, specularity, and other considerations, and further post-processing is involved to produce a realistic image. Target position over time is modelled on orbit dynamics with respect to a defined inertial frame. Transformations between inertial, target, and camera frames of reference are dealt with, to transform a rotating target in the inertial frame to the apparent rotation in the camera frame.
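The orbit model referred to here, Hill's equations (the Clohessy-Wiltshire equations), has a standard closed-form solution. A minimal sketch of propagating a relative state with it, with illustrative values rather than the thesis's data:

```python
import numpy as np

def cw_propagate(state0, n, t):
    """Closed-form Clohessy-Wiltshire (Hill) solution for relative motion
    about a circular reference orbit. Frame: x radial (outward), y along-track,
    z cross-track; n is the reference orbit's mean motion [rad/s]."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (vx0 / n) * s + (2 * vy0 / n) * (1 - c)
    y = (6 * (s - n * t) * x0 + y0 - (2 * vx0 / n) * (1 - c)
         + (vy0 / n) * (4 * s - 3 * n * t))
    z = z0 * c + (vz0 / n) * s
    return np.array([x, y, z])

n = 0.00113                                          # ~600 km LEO mean motion
state0 = [10.0, 0.0, 0.0, 0.0, -2 * n * 10.0, 0.0]   # metres, m/s (closed ellipse)
print(cw_propagate(state0, n, 600.0))                # relative position at t = 600 s
```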
|
43 |
Monitoring and Measuring Tool Wear Using an Online Machine Vision Setup / Sassi, Amine January 2022 (has links)
In manufacturing, monitoring machine health is an important step when implementing Industry 4.0 and ensures effective machining operations and minimal downtime. Monitoring the health of cutting tools during a machining process helps contain the faults associated with gradual tool wear, because they can be tracked and responded to as wear worsens. Left unchecked, tool failures can lead to more severe problems, such as dimensional and surface issues with machined workpieces and lower overall productivity during the machining process.
This research explores a machine vision setup used internally by the McMaster Manufacturing Research Institute (MMRI) on its three lathe machines. The setup provides a direct indication of the tool's maximum flank wear (VBmax), for which ISO 3685:1993(E) sets a tool-life criterion of 300 µm.
Also investigated was the use of image processing and analysis methods to determine the flank wear without removing the tool from the machine. This new, in-machine vision setup is intended to replace the use of an external optical microscope, which requires extended downtime between cutting passes. As a result of this replacement, experimentation downtime decreased by around 98.6%, reducing the experiment time from five weeks or more to just a couple of days. In addition, the difference in measurement between a commonly used optical microscope and the in-machine vision setup was found to be ±3 µm. / Thesis / Master of Science (MSc)
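A minimal sketch of the kind of flank-wear measurement described here: binarise the bright wear land and convert its maximum extent to micrometres. The threshold and µm-per-pixel scale are hypothetical, not the MMRI setup's calibration.

```python
import numpy as np

def vb_max(gray, threshold, um_per_px):
    """Binarise the bright wear land and return the maximum flank wear VBmax,
    taken here as the largest wear-band extent (in um) over image columns."""
    wear = gray > threshold            # worn flank reflects light brightly
    heights = wear.sum(axis=0)         # wear-band extent in each column [px]
    return float(heights.max()) * um_per_px

# Synthetic flank image: a 40 px deep bright wear band on a dark background.
img = np.zeros((200, 300), dtype=np.uint8)
img[80:120, 50:250] = 220
print(f"VBmax = {vb_max(img, 128, 7.5):.0f} um")   # 40 px * 7.5 um/px = 300 um
```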
|
44 |
Context sensitive cardiac x-ray imaging: a machine vision approach to x-ray dose control / Kengyelics, S.M., Gislason-Lee, Amber J., Keeble, C., Magee, D.R., Davies, A.G. 21 September 2015 (has links)
Modern cardiac x-ray imaging systems regulate their radiation output based on the thickness of the patient to maintain an acceptable signal at the input of the x-ray detector. This approach does not account for the context of the examination or the content of the image displayed. We have developed a machine vision algorithm that detects iodine-filled blood vessels and fits an idealized vessel model with the key parameters of contrast, diameter, and linear attenuation coefficient. The spatio-temporal distribution of the linear attenuation coefficient samples, when appropriately arranged, can be described by a simple linear relationship, despite the complexity of scene information. The algorithm was tested on static anthropomorphic chest phantom images under different radiographic factors and on 60 dynamic clinical image sequences. It was found to be robust and sensitive to changes in vessel contrast resulting from variations in system parameters. The machine vision algorithm has the potential of extracting real-time context-sensitive information that may be used for augmenting existing dose control strategies. / Project PANORAMA, funded by grants from Belgium, Italy, France, the Netherlands, United Kingdom, and the ENIAC Joint Undertaking.
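One way to picture the idealized vessel model named in this abstract: for a cylindrical iodine-filled vessel, the attenuation profile is the linear attenuation coefficient times the chord length through the vessel, and contrast and diameter fall out of a least-squares fit. A hedged sketch with synthetic data; the model form and numbers are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import curve_fit

def vessel_profile(x, mu, r, x0):
    """Idealised cylindrical vessel: the log-signal deficit at lateral offset
    x is mu times the chord length 2*sqrt(r^2 - (x - x0)^2), zero outside."""
    d2 = r ** 2 - (x - x0) ** 2
    return mu * 2.0 * np.sqrt(np.clip(d2, 0.0, None))

x = np.linspace(-5.0, 5.0, 101)                   # lateral position [mm]
clean = vessel_profile(x, 1.2, 1.5, 0.3)          # mu = 1.2 /mm, r = 1.5 mm
noisy = clean + np.random.default_rng(0).normal(0.0, 0.05, x.size)

(mu, r, x0), _ = curve_fit(vessel_profile, x, noisy, p0=[1.0, 1.0, 0.0])
print(f"mu = {mu:.2f} /mm, diameter = {2 * r:.2f} mm, "
      f"peak contrast = {2 * mu * r:.2f}")
```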
|
45 |
Machine vision image quality measurement in cardiac x-ray imaging / Kengyelics, S.M., Gislason-Lee, Amber J., Keeble, C., Magee, D., Davies, A.G. 16 March 2015 (has links)
The purpose of this work is to report on a machine vision approach for the automated measurement of x-ray image contrast of coronary arteries filled with iodine contrast media during interventional cardiac procedures. A machine vision algorithm was developed that creates a binary mask of the principal vessels of the coronary artery tree by thresholding a standard deviation map of the direction image of the cardiac scene derived using a Frangi filter. Using the mask, average contrast is calculated by fitting a Gaussian model to the greyscale profile orthogonal to the vessel centre line at a number of points along the vessel. The algorithm was applied to sections of single image frames from 30 left and 30 right coronary artery image sequences from different patients. Manual measurements of average contrast were also performed on the same images. A Bland-Altman analysis indicates good agreement between the two methods, with 95% confidence intervals of -0.046 to +0.048 and a mean bias of 0.001. The machine vision algorithm has the potential of providing real-time context-sensitive information so that radiographic imaging control parameters could be adjusted on the basis of clinically relevant image content. / Project PANORAMA, funded by grants from Belgium, Italy, France, the Netherlands, and the United Kingdom, and the ENIAC Joint Undertaking.
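A simplified sketch of this pipeline using off-the-shelf tools: a Frangi vesselness mask (here thresholded directly, rather than via the paper's standard-deviation map of the direction image) and a Gaussian fit to one greyscale profile across the vessel. The data is synthetic and all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from skimage.filters import frangi

rng = np.random.default_rng(1)
img = 0.8 + rng.normal(0.0, 0.01, (128, 128))
img[:, 60:68] -= 0.3                    # dark vertical "vessel" (iodine-filled)

# Vesselness mask (simplification: threshold the Frangi response directly).
mask = frangi(img) > 1e-4
print("vessel pixels in mask:", int(mask.sum()))

def gauss_dip(x, a, mu, sigma, b):      # Gaussian model of the vessel profile
    return b - a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

row = 64                                # one profile orthogonal to the vessel
xs = np.arange(img.shape[1], dtype=float)
p0 = [0.2, float(np.argmin(img[row])), 3.0, 0.8]
(a, mu, sigma, b), _ = curve_fit(gauss_dip, xs, img[row], p0=p0)
print(f"fitted contrast: {a:.2f}")      # depth of the dip ~ vessel contrast
```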
|
46 |
VISION BASED REAL-TIME MONITORING AND CONTROL OF METAL TRANSFER IN LASER ENHANCED GAS METAL ARC WELDING / Shao, Yan 01 January 2013 (has links)
Laser enhanced gas metal arc welding (GMAW) is a novel welding process in which a laser provides an auxiliary detaching force to help detach the droplet, so that welds of gas tungsten arc welding quality may be made at GMAW speeds. The current needed to generate the electromagnetic (detaching) force is thus reduced. The reduction in current helps reduce the impact on the weld pool and overheating fumes/smoke. However, in previous studies a continuous laser was applied. Since the auxiliary force is only needed each time the droplet is to be detached, and the detachment time is relatively short within the transfer cycle, laser energy is largely wasted during the rest of the cycle. In addition, the unnecessary application of the laser to the droplet causes additional overheating fumes. Hence, this study proposes to use a pulsed laser such that the peak pulse is applied only when the droplet is ready to detach. To this end, the state of droplet development needs to be closely monitored in real time. Since metal transfer is an ultra-high-speed process and the most reliable monitoring method is based on visual feedback, a high-speed imaging system has been proposed to monitor the real-time development of the droplet. A high-speed image processing system has been developed to extract the developing droplet in real time. A closed-loop control system has been established that uses the real-time image processing result to determine whether the laser peak pulse needs to be applied. Experiments verified the effectiveness of the proposed methods and the established system. A controlled novel process - pulsed laser-enhanced GMAW - is thus established for possible applications in producing high-quality welds at GMAW speeds.
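A toy sketch of the closed-loop idea: extract the droplet from each frame and fire the peak laser pulse only when it appears ready to detach. The segmentation rule, threshold, and laser interface below are hypothetical stand-ins for the real high-speed system.

```python
import numpy as np

DETACH_AREA_PX = 1200   # hypothetical threshold: droplet "ready to detach"

def droplet_area(frame, thresh=200):
    """Toy droplet extraction: bright molten metal against a darker background."""
    return int((frame > thresh).sum())

def control_loop(frames, fire_laser):
    """Fire the peak laser pulse only when the measured droplet is ready,
    instead of applying the laser continuously (the key idea of the thesis)."""
    for i, frame in enumerate(frames):
        if droplet_area(frame) >= DETACH_AREA_PX:
            fire_laser()          # placeholder for the real laser trigger I/O
            print(f"frame {i}: peak pulse fired")

# Synthetic frame sequence: a droplet that grows until it crosses the threshold.
frames = [np.where(np.add.outer(np.arange(100)**2, np.arange(100)**2)
                   < (10 + 3 * k)**2, 255, 0) for k in range(12)]
control_loop(frames, fire_laser=lambda: None)
```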
|
47 |
Vision-Based Localization Using Reliable Fiducial Markers / Stathakis, Alexandros 05 January 2012 (has links)
Vision-based positioning systems are founded primarily on a simple image processing technique: identifying visually significant key-points in an image and relating them to a known coordinate system in the scene. Fiducial markers provide the scene with a number of specific key-points, or features, such that computer vision algorithms can quickly identify them within a captured image. This thesis proposes a reliable vision-based positioning system which utilizes a unique pseudo-random fiducial marker. The marker itself offers 49 distinct feature points to be used in position estimation. Detection of the designed marker occurs through an integrated process of adaptive thresholding, k-means clustering, color classification, and data verification. The ultimate goal of such a system is indoor localization on low-cost autonomous mobile platforms.
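A minimal sketch of the first detection stages named here (adaptive thresholding and k-means colour clustering) using OpenCV. The block size, offset, k, and the toy marker image are illustrative, and the colour-classification and data-verification stages are omitted.

```python
import cv2
import numpy as np

def detect_marker_cells(bgr, k=2):
    """Sketch: adaptive thresholding of the grey image, then k-means
    clustering of pixel colours (e.g. to separate marker cell colours)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    samples = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    return binary, labels.reshape(gray.shape), centers

img = np.zeros((64, 64, 3), np.uint8)
img[16:48, 16:48] = (0, 0, 255)           # a red marker "cell" on black (BGR)
binary, labels, centers = detect_marker_cells(img)
print(centers.round())                    # one colour centre per cluster
```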
|
49 |
Recovering Scale in Relative Pose and Target Model Estimation Using Monocular Vision / Tribou, Michael January 2009 (has links)
A combined relative pose and target object model estimation framework using a monocular camera as the primary feedback sensor has been designed and validated in a simulated robotic environment. The monocular camera is mounted on the end-effector of a robot manipulator and measures the image plane coordinates of a set of point features on a target workpiece object. Using this information, the relative position and orientation, as well as the geometry, of the target object are recovered recursively by a Kalman filter process. The Kalman filter facilitates the fusion of supplemental measurements from range sensors with those gathered by the camera. This fusion allows the estimated system state to remain accurate and to recover the proper environment scale.
Current approaches in the research areas of visual servoing control and mobile robotics are studied in the case where the target object feature point geometry is well-known prior to the beginning of the estimation. In this case, only the relative pose of target object frames is estimated over a sequence of frames from a single monocular camera. An observability analysis was carried out to identify the physical configurations of camera and target object for which the relative pose cannot be recovered by measuring only the camera image plane coordinates of the object point features.
A popular extension is to estimate the target object model concurrently with the relative pose of the camera frame, a process known as Simultaneous Localization and Mapping (SLAM). The recursive framework was augmented to facilitate this larger estimation problem. The scale of the recovered solution is ambiguous using measurements from a single camera. A second observability analysis highlights more configurations for which the relative pose and target object model are unrecoverable from camera measurements alone. Instead, measurements which contain the global scale are required to obtain an accurate solution.
A set of additional sensors are detailed, including range finders and additional cameras. Measurement models for each are given, which facilitate the fusion of this supplemental data with the original monocular camera image measurements. A complete framework is then derived to combine a set of such sensor measurements to recover an accurate relative pose and target object model estimate.
This proposed framework is tested in a simulation environment with a virtual robot manipulator tracking a target object workpiece through a relative trajectory. All of the detailed estimation schemes are executed: the single monocular camera cases in which the target object geometry is known and unknown, respectively; a two-camera system in which the measurements are fused within the Kalman filter to recover the scale of the environment; a camera and point range sensor combination which provides a single range measurement at each system time step; and a laser pointer and camera hybrid which concurrently measures the feature point images and a single range metric. The performance of the individual test cases is compared to determine which set of sensors is able to provide robust and reliable estimates for use in real-world robotic applications.
Finally, some conclusions on the performance of the estimators are drawn and directions for future work are suggested. The camera and range finder combination is shown to accurately recover the proper scale for the estimate and warrants further investigation. Further, early results from the multiple monocular camera setup show superior performance to the other sensor combinations and interesting possibilities are available for wide field-of-view super sensors with high frame rates, built from many inexpensive devices.
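The scale-recovery step can be pictured with a generic extended Kalman filter update: a monocular camera constrains only the direction to a feature, while a single range measurement observes the scale along that ray. A hedged sketch with made-up numbers, not the thesis's filter:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update: fuse measurement z with model h(x)."""
    y = z - h(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# State: target position in the camera frame, p = [X, Y, Z].
x = np.array([0.9, 0.1, 4.0])          # current estimate (wrong scale)
P = np.diag([4.0, 4.0, 4.0])           # large uncertainty along the ray

# A single range-finder reading fixes the scale the camera cannot observe.
p_true = np.array([0.5, 0.05, 2.0])    # the true target is half as far
z = np.array([np.linalg.norm(p_true)]) # measured range ||p||
h = lambda s: np.array([np.linalg.norm(s)])
H = (x / np.linalg.norm(x)).reshape(1, 3)   # Jacobian of ||p|| at the estimate
R = np.array([[1e-4]])

x_new, P_new = ekf_update(x, P, z, h, H, R)
print(x_new)   # estimate pulled toward the correct scale along the ray
```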
|
50 |
Machine Vision on FPGA for Recognition of Road Signs / Hashemi, Ashkan January 2012 (has links)
This thesis focuses on developing a robust algorithm for the recognition of road signs, covering all stages of a machine vision system, i.e. image acquisition, pre-processing, colour segmentation, labelling, and classification. Images are acquired by two different imaging systems, and noise removal is done by applying a mean filter. Furthermore, different colour segmentation methods are investigated to find the highest-performance approach; after applying dynamic segmentation based on the blue channel in the YCbCr colour space, the obtained binary image is transferred to a personal computer through the developed PC software over a standard serial port, and further processing and classification run on the PC. The Histogram of Oriented Gradients (HOG) is used as the main feature for recognition of road signs, and finally the classification task is fulfilled by employing a hardware-efficient Minimum Distance Classifier (MDC).
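A compact sketch of the recognition back-end described here, HOG features with a nearest-centroid (minimum distance) classifier, using scikit-image. The images are random stand-ins and the HOG parameters are illustrative, not the thesis's FPGA implementation.

```python
import numpy as np
from skimage.feature import hog

def hog_vec(img):
    """HOG descriptor of a grey sign image (parameters are illustrative)."""
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def mdc_train(samples, labels):
    """Minimum Distance Classifier: one mean HOG vector per sign class."""
    return {c: np.mean([s for s, l in zip(samples, labels) if l == c], axis=0)
            for c in set(labels)}

def mdc_predict(centroids, x):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - x))

rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(6)]   # stand-ins for sign crops
feats = [hog_vec(im) for im in imgs]
labels = ["stop", "stop", "stop", "yield", "yield", "yield"]
model = mdc_train(feats, labels)
print(mdc_predict(model, feats[0]))
```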
|