11. Improving the safety and efficiency of rail yard operations using robotics
Boddiford, Andrew Shropshire, 10 March 2015
Significant efforts have been expended by the railroad industry to make operations safer and more efficient through the intelligent use of sensor data. This work proposes to take the technology one step further and use this data to control physical systems designed to automate hazardous railroad operations, particularly those that require humans to interact with moving trains. To accomplish this, application-specific requirements must be established to design self-contained machine vision and robotic solutions that eliminate the risks associated with existing manual operations. Present-day rail yard operations have been identified as good candidates to begin development. Manual uncoupling of rolling stock in classification yards, in particular, has been investigated. To automate this process, an intelligent robotic system must be able to detect, track, approach, contact, and manipulate constrained objects on equipment in motion. This work presents multiple prototypes capable of autonomously uncoupling full-scale freight cars using feedback from the surrounding environment. Geometric image processing algorithms and machine learning techniques were implemented to accurately identify cylindrical objects in point clouds generated in real time. Unique methods fusing velocity and vision data were developed to synchronize a pair of moving rigid bodies in real time. Multiple custom end-effectors with in-built compliance and fault tolerance were designed, fabricated, and tested for grasping and manipulating cylindrical objects. Finally, an event-driven robotic control application was developed to safely and reliably uncouple freight cars using data from 3D cameras, velocity sensors, force/torque transducers, and intelligent end-effector tooling. Experimental results in a lab setting confirm that modern robotic and sensing hardware can be used to reliably separate pairs of rolling stock moving at up to two miles per hour. Additionally, subcomponents of the autonomous pin-pulling system (APPS) were designed to be modular to the point where they could be used to automate other hazardous, labor-intensive tasks found in U.S. classification yards. Overall, this work supports the deployment of autonomous robotic systems in semi-unstructured yard environments to increase the safety and efficiency of rail operations.
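The core perception step described here, picking a cylindrical handle out of a streaming point cloud, can be illustrated with a much-simplified sketch. The snippet below is not the thesis's algorithm: it assumes the cylinder axis is roughly vertical, collapses the candidate cluster onto the ground plane, and accepts it if a least-squares circle fit matches an expected radius. All function names, tolerances, and the synthetic test data are illustrative.

```python
import numpy as np

def fit_circle_2d(xy):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def detect_vertical_cylinder(points, expected_radius, radius_tol=0.01,
                             inlier_tol=0.005, min_inlier_frac=0.6):
    """Crude cylinder check for a point-cloud cluster whose axis is assumed
    near-vertical: project to the x-y plane, fit a circle, and accept if the
    radius matches and enough points lie close to the fitted wall."""
    cx, cy, r = fit_circle_2d(points[:, :2])
    dist_to_wall = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
    inlier_frac = np.mean(dist_to_wall < inlier_tol)
    ok = abs(r - expected_radius) < radius_tol and inlier_frac > min_inlier_frac
    return ok, (cx, cy, r), inlier_frac

# Synthetic test: a noisy, half-visible cylinder surface of 2 cm radius.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 500)
z = rng.uniform(0.0, 0.3, 500)
pts = np.column_stack([0.02 * np.cos(theta), 0.02 * np.sin(theta), z])
pts[:, :2] += rng.normal(scale=0.001, size=(500, 2))
print(detect_vertical_cylinder(pts, expected_radius=0.02))
```

As the abstract notes, a deployed system would additionally fuse velocity data so the fitted target can be tracked while the car is in motion; this sketch covers only the static geometric check.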
12. A comparative study of machine vision classification techniques for the detection of missing clips
Miles, Brandon, 14 August 2009
This thesis provides a comparative study of machine vision (MV) classification techniques for the detection of missing clips on an automotive part known as a cross car beam. This is a difficult application for an automated MV system because the inspection is conducted in an open manufacturing environment with variable lighting conditions.
A laboratory test cell was first used to investigate the effect of lighting. QVision, a software program originally developed at Queen’s University, was used to perform a representative inspection task. Solutions with different light sources and camera settings were investigated in order to determine the best possible setup for acquiring an image of the part. Feature selection was then applied to improve the classification results.
The MV system was then installed on an industrial assembly line. QVision was modified to detect the presence or absence of four clips and communicate this information to the computer controlling the manufacturing cell. Features were extracted from the image, and a neuro-fuzzy (ANFIS) system was trained to perform the inspection. A performance goal of 0% false positives and less than 2% false negatives was achieved with the feature-based ANFIS classifier. In addition, the problem of a rusty clip was examined, and a radial hole algorithm was used to improve performance in this case. However, the system required hours to train.
Five new classifiers were then compared to the original feature-based ANFIS classifier: 1) feature-based with a neural network, 2) feature-based with principal component analysis (PCA) applied and ANFIS, 3) feature-based with PCA applied and a neural network, 4) eigenimage-based with ANFIS, and 5) eigenimage-based with a neural network. The effect of adding a Hough rectangle feature and a principal component colour feature was also studied. It was found that the neural network classifier performed better than the ANFIS classifier. When PCA was applied, the results improved still further. Overall, feature-based classifiers performed better than eigenimage-based classifiers. Finally, it should be noted that these six classifiers required only minutes to train.
Thesis (Master, Mechanical and Materials Engineering), Queen’s University, 2009.
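As a rough illustration of the best-performing combination reported above (hand-crafted features, PCA for dimensionality reduction, then a neural network), the sketch below builds such a pipeline with scikit-learn on synthetic data. It is not the thesis's code: the feature matrix, class weights, layer size, and variance threshold are placeholders standing in for the clip-region features QVision would extract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in feature vectors: one row per imaged clip location, label 1 = clip missing.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),                      # put features on a common scale
    PCA(n_components=0.95),                # keep components covering 95% of variance
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")  # thesis target: 0% FP, <2% FN
```

Training a pipeline of this size takes seconds to minutes on ordinary hardware, which is consistent with the abstract's observation that the neural-network classifiers trained far faster than the original ANFIS system.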
13. FindFace: finding facial features by computer
Tock, David, January 1992
Recognising faces is a task taken for granted by most people, yet it probably represents one of the most complicated visual tasks we routinely perform. Progress in machine vision over recent years has been considerable, but has generally concentrated on areas inappropriate to face recognition. Faces are soft and round, lacking the clear edges and strong geometric properties usually required for machine vision. Instead, subtle changes in shading and texture indicate the transition from one feature to another. To compound the problem, faces are generally very similar, and the small differences that do exist are significant. We describe a machine vision system, called FindFace, that makes use of the underlying similarity of faces to locate specific features, such as the eyes and the mouth. Statistics gathered from 1000 faces are used both to predict the location of features and to evaluate locations generated by numerous independent feature-locating routines, called experts. Once an initial location is determined, predictions about the positions of other features can be investigated. This can lead to a rapid increase in confidence as other features are identified in their predicted positions, or alternatively to the initial location being quickly rejected. Individual experts can be simple, as a supervisory control system evaluates their performance using the face statistics and can distinguish good results from bad. The control system can utilise multiple experts for individual features, selecting the most appropriate dynamically based on their previous success rate. The interface between experts and the control system is simple, making the addition of new experts easy. The combination of detailed statistics with many feature experts results in a system that is unhindered by failure to locate specific features, and that continues searching for features until the best solution is obtained with the experts available.
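The central idea, weighting candidate locations from independent experts by positional statistics and by each expert's track record, can be sketched in a few lines. The toy example below is not FindFace: the Gaussian prior, the expert names, and every number in it are hypothetical.

```python
import numpy as np

def gaussian_prior(xy, mean, cov):
    """Likelihood of a candidate location under a positional prior gathered
    from many example faces (2D Gaussian density)."""
    d = np.asarray(xy) - np.asarray(mean)
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

# Hypothetical prior for the left eye, in normalised face coordinates.
eye_mean = (0.30, 0.40)
eye_cov = np.array([[0.0009, 0.0], [0.0, 0.0004]])

# Each expert proposes a location plus its own confidence; "reliability" is the
# past success rate a supervisory controller would maintain for that expert.
candidates = [
    {"expert": "dark-blob", "xy": (0.31, 0.41), "conf": 0.7, "reliability": 0.9},
    {"expert": "edge-ring", "xy": (0.55, 0.42), "conf": 0.9, "reliability": 0.6},
    {"expert": "template",  "xy": (0.29, 0.38), "conf": 0.5, "reliability": 0.8},
]

for c in candidates:
    c["score"] = gaussian_prior(c["xy"], eye_mean, eye_cov) * c["conf"] * c["reliability"]

best = max(candidates, key=lambda c: c["score"])
print(f"accepted: {best['expert']} at {best['xy']} (score {best['score']:.1f})")
```

In this toy run the statistically plausible "dark-blob" proposal wins even though the off-position "edge-ring" expert reported higher raw confidence, which is the effect the abstract attributes to combining face statistics with many simple experts.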
14. An investigation into the suitability of genetic programming for computing visibility areas for sensor planning
Grant, Michael Sean, January 2000
No description available.
15. Self-learning systems and neural networks for image texture analysis
Zhang, Zhengwen, January 1995
No description available.
16. Spatiotemporal filtering with neural circuits for motion detection and tracking
Atkins, Philip J., January 1996
No description available.
17. High speed image processing for machine vision
Bowman, C. C., January 1986
No description available.
18. Applications of sequence geometry to visual motion
Clarke, John Christopher, January 1997
No description available.
19. Hierarchical design approach to texture analysis by spatial grey level dependence
Wood, Andrew John, January 1994
No description available.
20. A study of measured texture in images of natural scenes under varying illumination conditions
Khondkar, B. K., January 1995
No description available.