81 |
Automated freeform assembly of threaded fasteners. Dharmaraj, Karthick. January 2015.
Over the past two decades, much of the manufacturing and assembly market has been driven by customer requirements. Increasing demand for personalised products creates demand for smaller batch sizes, shorter production times, lower costs, and the flexibility to produce families of products, or different parts, with the same equipment. Consequently, manufacturing companies have deployed various automation systems and production strategies to improve resource efficiency and move towards right-first-time production. However, many of these automated systems, built around robot-based, repeatable assembly automation, require component-specific fixtures for accurate positioning and extensive robot programming to achieve flexibility in production. Threaded fastening operations are widely used in assembly. In high-volume production, fastening processes are commonly automated using jigs, fixtures, and semi-automated tools. This form of automation delivers reliable assembly results at the expense of flexibility and requires component variability to be adequately controlled. In low-volume, high-value manufacturing, by contrast, fastening processes are typically carried out manually by skilled workers. This research addresses these issues by developing a freeform automated threaded-fastener assembly system that uses 3D visual guidance. The proof-of-concept system focuses on picking up fasteners from clutter, identifying a hole feature in an imprecisely positioned target component, and carrying out torque-controlled fastening, achieving flexibility and adaptability without dedicated fixtures or robot programming. The research also investigates and evaluates different 3D imaging technologies to identify those suitable for fastener assembly in an unstructured industrial environment. The proposed solution uses commercially available technologies to enhance the precision and speed of component identification for assembly processes, thereby supporting and validating reliable industrial implementation. As part of this research, a number of novel algorithms were developed to robustly identify assembly components located in a random environment by enhancing existing methods and technologies within the domain of fastening processes. A bolt identification algorithm identifies bolts located in random clutter by enhancing an existing surface-based matching algorithm. A novel hole-feature identification algorithm detects threaded holes and identifies their size and location in 3D. The bolt and hole-feature identification algorithms are robust and have the sub-millimetre accuracy required for successful fastener assembly under industrial conditions. In addition, the processing time required to identify and localise bolts and hole features is under a second, increasing the speed of fastener assembly.
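The hole-identification step invites a small numerical illustration. The sketch below is not the thesis's algorithm (whose details are not given in this abstract); it assumes the rim points of a candidate hole have already been extracted from the point cloud and projected onto the component's surface plane, and recovers the hole's centre and radius with a least-squares circle fit.

```python
import numpy as np

def fit_hole_circle(rim_xy):
    """Least-squares (Kasa) circle fit to 2D rim points of a candidate hole.

    rim_xy : (N, 2) points on the hole boundary, already projected onto the
    plane of the component surface.  Returns (centre_x, centre_y, radius).
    """
    x, y = rim_xy[:, 0], rim_xy[:, 1]
    # Fit x^2 + y^2 = a*x + b*y + c in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Hypothetical usage: noisy rim points of a 1.5-unit-radius hole centred at (4, 2).
theta = np.linspace(0.0, 2.0 * np.pi, 60)
rim = np.column_stack([4.0 + 1.5 * np.cos(theta), 2.0 + 1.5 * np.sin(theta)])
rim += np.random.normal(scale=0.02, size=rim.shape)   # simulated sensor noise
print(fit_hole_circle(rim))                           # ~ (4.0, 2.0, 1.5)
```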
|
82 |
Head and Shoulder Detection using CNN and RGBD Data. El Ahmar, Wassim. 18 July 2019.
Alex Krizhevsky and his colleagues changed the world of machine vision and image processing in 2012 when their deep learning model, named AlexNet, won the ImageNet Large Scale Visual Recognition Challenge with an error rate more than 10.8% lower than that of their closest competitor. Ever since, deep learning approaches have been an area of extensive research for tasks such as object detection, classification, and pose estimation. This thesis presents a comprehensive analysis of different deep learning models and architectures that have delivered state-of-the-art performance in various machine vision tasks. These models are compared to each other and their strengths and weaknesses are highlighted.

We introduce a new approach for human head and shoulder detection from RGB-D data based on a combination of image processing and deep learning. Candidate head-top locations (CHL) are generated by a fast and accurate image processing algorithm that operates on depth data. We propose enhancements to the CHL algorithm that make it three times faster. Different deep learning models are then evaluated for classification and detection on the candidate head-top locations, regressing head bounding boxes and detecting shoulder keypoints. We propose three small models based on convolutional neural networks for this problem. Experimental results for different architectures of our model are highlighted, and its performance is compared to MobileNet.

Finally, we show the differences between three types of input to the CNN models: RGB images, a three-channel representation generated from depth data (depth map, multi-order depth template, and height difference map, or DMH), and a four-channel input composed of RGB+D data.
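To make the depth-based candidate generation concrete, here is a minimal sketch of one way head-top candidates could be proposed from a depth map; the window size, depth range, and local-minimum criterion are illustrative assumptions, not the CHL algorithm developed in the thesis.

```python
import numpy as np

def candidate_head_tops(depth, win=15, depth_range=(0.8, 4.0)):
    """Illustrative candidate head-top detector on a depth map (in metres).

    A pixel is kept if it is the minimum of its (win x win) neighbourhood,
    i.e. locally closest to an elevated camera, and its depth lies in a
    plausible range.  Invalid (zero) depths should be masked out beforehand.
    Window size and range are made-up values, not those from the thesis.
    """
    h, w = depth.shape
    pad = win // 2
    padded = np.pad(depth, pad, mode="edge")
    candidates = []
    for r in range(h):
        for c in range(w):
            d = depth[r, c]
            patch = padded[r:r + win, c:c + win]   # window centred on (r, c)
            if d == patch.min() and depth_range[0] < d < depth_range[1]:
                candidates.append((r, c))
    return candidates
```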
|
83 |
Sistema óptico baseado em visão computacional para obtenção de níveis de turbulência na superfície de escoamentos livres com aplicação na determinação de parâmetros relacionados com a reoxigenação do meio líquido / Optical system based on machine vision for measurement of surface turbulence level in open flow with application on determination of parameters related to reaeration of the liquid phase. Széliga, Marcos Rogério. 12 September 2003.
The optical system, based on machine vision, consists of devices for generating, acquiring, and processing images of a laser beam incident on a flow surface and reflected onto a horizontal screen. To measure turbulence at the flow surface, the image generation and acquisition devices were mounted on a tank in which turbulence is produced hydrodynamically by oscillating grids. Software with a graphical interface was developed to process the images and obtain geometric data of the flow. At up to 30 frames per second it is possible to visualise the turbulent oscillation as well as 3D surfaces, equivalent to the real flow, generated on a finite-difference mesh. Vertical velocities, surface enlargements, and angular velocities are obtained, among other parameters, under various turbulence conditions. In the same tank, measurements of dissolved-oxygen concentration had previously been made using a technique that allows the reaeration coefficient K2 to be determined. Turbulence data and K2 coefficients were combined in a graphical model to enable prediction of this coefficient in natural flows, with application to estimating the self-purification capacity of effluent-receiving water bodies whose oxygen levels are depressed.
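As a rough illustration of the kind of quantities taken from the finite-difference surface mesh, the sketch below computes vertical velocity and local surface enlargement from two successive height fields; the function and its inputs are assumptions for illustration, not part of the software described above.

```python
import numpy as np

def surface_kinematics(h_prev, h_next, dt, dx):
    """Vertical velocity and local surface enlargement from two height fields.

    h_prev, h_next : 2-D surface elevation on a regular mesh (m)
    dt : time between frames (s);  dx : mesh spacing (m)
    """
    w_vertical = (h_next - h_prev) / dt         # dh/dt at every mesh node
    hy, hx = np.gradient(h_next, dx)            # surface slopes
    enlargement = np.sqrt(1.0 + hx**2 + hy**2)  # local area ratio vs. a flat surface
    return w_vertical, enlargement
```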
|
84 |
UTVECKLING AV ETT VISIONSTYRT SYSTEM FÖR PLOCKNING AV OSORTERADE DETALJER : En tillämpning av bin-picking i plaströrsproduktion / Development of a vision controlled system for picking unorganized products : An application of bin picking in plastic pipe production. Persson, Casper; Åstrand, Ludvig. January 2019.
This bachelor's thesis was carried out at Mabema AB in Linköping, a company that offers complete vision-based systems for multiple applications. Using camera technology and advanced image processing, the company works mainly in four business areas: RobotVision, Vision, Nuclear, and Wood. Mabema AB was assigned to develop a vision system for robot guidance for Pipelife Sverige AB, a large supplier of plastic pipes. The vision system is to identify plastic pipes transported in random order on a conveyor belt; the pipes are then to be picked by two robots and placed in fixtures for further processing. Through studies of existing similar systems and analysis of suitable hardware, a system that satisfies the customer's needs was designed and alternative systems were evaluated. The result of the thesis is a vision-controlled system built around two robots and a 3D scanner that accomplishes the assigned task with high robustness, together with an analysis of alternative systems.
|
85 |
An Information Theoretic Hierarchical Classifier for Machine Vision. Andrews, Michael J. 11 May 1999.
A fundamental problem in machine vision is the classification of objects that may have unknown position, orientation, or a combination of these and other transformations. The massive amount of data required to accurately form an appearance-based model of an object under all values of shift and rotation transformations has discouraged incorporating the combination of both transformations into a single model representation. This Master's thesis documents the theory and implementation of a hierarchical classifier, named the Information Theoretic Decision Tree system, which has the demonstrated ability to form shift- and rotation-invariant appearance-based models of objects that can be searched with a great reduction in evaluations compared with a linear sequential search. Information theory is used to obtain a measure of information gain in a feature-space recursive segmentation algorithm, which positions hyperplanes at local information-gain maxima. This is accomplished dynamically through a process of local optimization based on a conjugate gradient technique enveloped by a simulated annealing optimization loop. Several target-model training strategies have been developed for shift and rotation invariance, notably the method of exemplar grouping, in which any combination of rotation and translation transformations of target object views can be simulated and folded into the appearance-based model. The decision-tree target models produced by this process efficiently represent the voluminous training data, affording rapid test-time classification of objects.
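The splitting criterion can be shown in a short sketch: the information gain of one candidate hyperplane over a labelled sample set. The conjugate-gradient search inside a simulated annealing loop that the thesis uses to position the hyperplane is not reproduced here.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def hyperplane_information_gain(X, y, w, b):
    """Information gain of splitting samples X (N x d), with labels y, by the
    hyperplane w.x + b = 0."""
    side = X @ w + b > 0
    n, n_pos = len(y), int(side.sum())
    if n_pos == 0 or n_pos == n:               # degenerate split: no gain
        return 0.0
    h_children = (n_pos / n) * entropy(y[side]) + \
                 ((n - n_pos) / n) * entropy(y[~side])
    return entropy(y) - h_children
```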
|
86 |
DESIGN OF A MACHINE VISION CAMERA FOR SPATIAL AUGMENTED REALITY. Ruffner, Matt Phillip. 01 January 2018.
Structured Light Imaging (SLI) is a means of digital reconstruction, or Three-Dimensional (3D) scanning, with uses that span many disciplines. A projector, camera, and Personal Computer (PC) are required to perform such 3D scans. Slight variations in synchronization between these three devices can cause malfunctions in the process because PC graphics processors have limitations as real-time systems. Previous work used a Field Programmable Gate Array (FPGA) to both drive the projector and trigger the camera, eliminating these timing issues but still requiring an external camera. This thesis proposes integrating the camera with the FPGA SLI controller by means of a custom printed circuit board (PCB) design. Featuring a high-speed image sensor as well as High Definition Multimedia Interface (HDMI) input and output, this PCB enables the FPGA to perform SLI scans as well as pass HDMI video through to the projector for Spatial Augmented Reality (SAR) purposes. Minimizing ripple noise on the power supply through effective circuit design and PCB layout yields a compact and cost-effective machine vision sensing solution.
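As background on how an SLI scan is decoded in general, the sketch below recovers wrapped phase from a common three-step sinusoidal pattern set; the abstract does not say which pattern scheme this system uses, so this is a generic illustration only.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three captures of sinusoidal patterns shifted by
    120 degrees: I_k = A + B*cos(phi + k*2*pi/3) for k = -1, 0, +1."""
    i1, i2, i3 = (np.asarray(x, dtype=float) for x in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```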
|
87 |
Picking Parts out of a Bin. Horn, Berthold K.P.; Ikeuchi, Katsushi. 01 October 1983.
One of the remaining obstacles to the widespread application of industrial robots is their inability to deal with parts that are not precisely positioned. In the case of manual assembly, components are often presented in bins. Current automated systems, on the other hand, require separate feeders which present the parts with carefully controlled position and attitude. Here we show how results in machine vision provide techniques for automatically directing a mechanical manipulator to pick one object at a time out of a pile. The attitude of the object to be picked up is determined using a histogram of the orientations of visible surface patches. Surface orientation, in turn, is determined using photometric stereo applied to multiple images. These images are taken with the same camera but under differing lighting. The resulting needle map, giving the orientations of surface patches, is used to create an orientation histogram which is a discrete approximation to the extended Gaussian image. This can be matched against a synthetic orientation histogram obtained from prototypical models of the objects to be manipulated. Such models may be obtained from computer aided design (CAD) databases. The method thus requires that the shape of the objects be described, but it is not restricted to particular types of objects.
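The two core steps, photometric stereo and the orientation histogram (a discrete extended Gaussian image), can be sketched compactly; shadow handling, the choice of bin layout, and the matching against synthetic histograms are simplified or omitted here.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover unit surface normals (the needle map) from k >= 3 images taken
    with the same camera under known light directions, assuming a Lambertian
    surface.  images: (k, H, W) brightness arrays; lights: (k, 3) unit vectors."""
    k, H, W = images.shape
    I = images.reshape(k, -1)                        # k x (H*W)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # 3 x (H*W): albedo * normal
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-9)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

def orientation_histogram(normals, bins=(18, 36)):
    """Discrete approximation to the extended Gaussian image: a histogram of
    surface-patch orientations over (polar, azimuth) bins."""
    nx, ny, nz = normals.reshape(3, -1)
    theta = np.arccos(np.clip(nz, -1.0, 1.0))        # polar angle
    phi = np.arctan2(ny, nx)                         # azimuth
    hist, _, _ = np.histogram2d(theta, phi, bins=bins,
                                range=[[0.0, np.pi], [-np.pi, np.pi]])
    return hist / hist.sum()
```

A measured histogram would then be matched against synthetic histograms generated from the prototype models to determine attitude, as described above.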
|
88 |
Finding Junctions Using the Image Gradient. Beymer, David J. 01 December 1991.
Junctions are the intersection points of three or more intensity surfaces in an image. An analysis of zero crossings and the gradient near junctions demonstrates that gradient-based edge detection schemes fragment edges at junctions. This fragmentation is caused by the intrinsic pairing of zero crossings and a destructive interference of edge gradients at junctions. Using the previous gradient analysis, we propose a junction detector that finds junctions in edge maps by following gradient ridges and using the minimum direction of saddle points in the gradient. The junction detector is demonstrated on real imagery and previous approaches to junction detection are discussed.
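One ingredient of this detector, locating saddle points of the gradient magnitude, can be sketched as follows; the near-zero threshold is an arbitrary illustration value, and the ridge-following and minimum-direction steps are omitted.

```python
import numpy as np

def gradient_saddle_points(image, flat_tol=1e-2):
    """Flag saddle points of the image-gradient magnitude: pixels where the
    first derivatives of |grad I| are near zero and the Hessian determinant
    is negative (eigenvalues of opposite sign)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                  # gradient magnitude surface
    my, mx = np.gradient(mag)
    mxy, mxx = np.gradient(mx)              # second derivatives of |grad I|
    myy, _ = np.gradient(my)
    det = mxx * myy - mxy**2
    flat = (np.abs(mx) < flat_tol) & (np.abs(my) < flat_tol)
    return flat & (det < 0)
```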
|
89 |
An Analog VLSI Chip for Estimating the Focus of Expansion. McQuirk, Ignacio Sean. 21 August 1996.
For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and the point from which other image points appear to expand outward. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences at every pixel between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately because the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the FOE estimate is taken as the point that minimizes the sum of squared perpendicular distances from the tangents at those points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme. A 64 x 64 version was fabricated in a 2 um CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
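The least-squares step at the heart of the chip can be written down directly: given the stationary points and the brightness gradients there, the FOE estimate is the point minimizing the summed squared perpendicular distances to the contour tangents, which reduces to a 2 x 2 linear system. The sketch below is a numerical illustration of that minimization, not a model of the analog circuit.

```python
import numpy as np

def estimate_foe(points, gradients):
    """Least-squares focus-of-expansion estimate from stationary points.

    points    : (N, 2) image coordinates where the brightness time-derivative
                is (near) zero
    gradients : (N, 2) spatial brightness gradients (Ex, Ey) at those points

    At a stationary point the FOE lies on the tangent to the iso-brightness
    contour, i.e. on the line through the point perpendicular to the gradient,
    so we minimize sum_i (g_i . (x0 - p_i))^2 with unit gradients g_i.
    """
    g = gradients / np.linalg.norm(gradients, axis=1, keepdims=True)
    A = g.T @ g                                     # sum of g g^T  (2 x 2)
    b = np.einsum('ni,nj,nj->i', g, g, points)      # sum of (g g^T) p
    return np.linalg.solve(A, b)

# Hypothetical check: contour tangents through a known FOE at (10, -3).
rng = np.random.default_rng(0)
foe = np.array([10.0, -3.0])
pts = rng.normal(size=(50, 2)) * 40.0
radial = pts - foe
grads = np.column_stack([-radial[:, 1], radial[:, 0]])  # gradient perpendicular to the radial flow
print(estimate_foe(pts, grads))                         # ~ [10.0, -3.0]
```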
|
90 |
Image Chunking: Defining Spatial Building Blocks for Scene Analysis. Mahoney, James V. 01 August 1987.
Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time-performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis: boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine.
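A generic illustration of chunk-style processing (using ordinary two-stage connected labelling, not the report's own chunk definitions): label connected regions inside each fixed-size chunk independently, a step that could run in parallel, then merge labels across chunk borders.

```python
import numpy as np

def chunked_area_coloring(mask, chunk=8):
    """Two-stage 'area coloring' of a binary mask: 4-connected labelling
    inside each chunk (independent, parallelisable work), then a union-find
    merge of labels that touch across chunk borders.  Final labels are
    representative ids, not consecutive integers."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    next_label = 1

    # Stage 1: flood-fill labelling restricted to each chunk.
    for r0 in range(0, H, chunk):
        for c0 in range(0, W, chunk):
            r_hi, c_hi = min(r0 + chunk, H), min(c0 + chunk, W)
            for r in range(r0, r_hi):
                for c in range(c0, c_hi):
                    if mask[r, c] and labels[r, c] == 0:
                        stack, labels[r, c] = [(r, c)], next_label
                        while stack:
                            y, x = stack.pop()
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                                if (r0 <= ny < r_hi and c0 <= nx < c_hi and
                                        mask[ny, nx] and labels[ny, nx] == 0):
                                    labels[ny, nx] = next_label
                                    stack.append((ny, nx))
                        next_label += 1

    # Stage 2: merge labels of adjacent foreground pixels (only cross-chunk
    # neighbours can still differ after stage 1).
    parent = list(range(next_label))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(H):
        for c in range(W):
            if mask[r, c]:
                for ny, nx in ((r + 1, c), (r, c + 1)):
                    if ny < H and nx < W and mask[ny, nx] and labels[ny, nx] != labels[r, c]:
                        union(labels[r, c], labels[ny, nx])

    return np.vectorize(find)(labels) * mask   # background stays zero
```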
|