
Analysis of non-steady state physiological and pathological processes

Hill, Nathan R. January 2008
The analysis of non-steady-state physiological and pathological processes concerns the abstraction, extraction, formalisation and analysis of information from physiological systems that is obscured, hidden or inaccessible to traditional methods. First, Time Series Analysis (TSA) techniques were developed and built into a software program, Easy TSA, to examine the temporal characteristics of hormonal concentration oscillations: periodicity, phase and pulsatility. The Easy TSA program was validated using constructed data sets and then applied in a clinical study of the relationship between insulin and obesity in people without diabetes. Fifty-six non-diabetic subjects (28M, 28F) were examined using data from a number of protocols. Fourier transform and autocorrelation techniques determined that the level of BMI had a critical effect on the frequency, amplitude and regularity of insulin oscillations. Second, information systems formed the background to the development of an algorithm to examine glycaemic variability, and a new methodology termed the Glycaemic Risk in Diabetes Equation (GRADE) was developed. The aim was to report an integrated glycaemic risk score from glucose profiles that would complement summary measures of glycaemia such as HbA1c. GRADE was applied retrospectively to blood glucose data sets to determine whether it was clinically relevant. Subjects with type 1 and type 2 diabetes had higher GRADE scores than the non-diabetic population, and the contributions of hypo- and hyperglycaemic episodes to risk were demonstrated. A prospective study was then designed to apply GRADE in a clinical context and to measure its statistical reproducibility. Fifty-three subjects (26 male, 27 female) measured their blood glucose four times daily for twenty-one days.
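The pulse-period analysis inside Easy TSA is not spelled out in the abstract, but a minimal autocorrelation-based period estimate of the kind such tools implement can be sketched as follows (pure Python; the synthetic series and lag range are illustrative, not from the thesis):

```python
import math

def autocorrelation(series, lag):
    """Normalised autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

def dominant_period(series, max_lag):
    """Lag (in samples) with the highest autocorrelation in [2, max_lag):
    a crude estimate of the dominant pulse period."""
    return max(range(2, max_lag), key=lambda lag: autocorrelation(series, lag))

# Synthetic "hormone" series: baseline 10 with a 10-sample oscillation.
series = [10 + 3 * math.sin(2 * math.pi * t / 10) for t in range(100)]
print(dominant_period(series, 15))  # → 10
```

A Fourier transform of the same series would show the matching spectral peak; autocorrelation is often preferred for short, noisy hormone profiles because it needs no windowing choices.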
Lower HbA1c values correlated with an increased risk of hypoglycaemia, and higher HbA1c values with an increased risk of hyperglycaemia. Subjects with an HbA1c of 7.0 had median GRADE values ranging from 2.2 to 10.5. The GRADE score summarised diverse glycaemic profiles into a single assessment of risk: well-controlled glucose profiles yielded GRADE scores <= 5, and higher scores represented increased clinical risk from hypo- or hyperglycaemia. Third, an information system was developed to analyse data-rich multi-variable retinal images using the concept of assessment of change rather than specific lesion recognition. A fully Automated Retinal Image Differencing (ARID) computer system was developed to highlight change between retinal images over time. ARID was first validated in a dedicated study, and a retrospective study then sought to determine whether the ARID software aided the retinal screener. One hundred and sixty images (80 image pairs) were obtained from the Gloucestershire Diabetic Eye Screening Programme. Image pairs were graded manually and categorised according to whether each type of lesion had progressed, regressed or remained unchanged between image A and image B. After a 30-day washout period the image pairs were graded using ARID and the results compared. Grading with ARID (Table 4.3) increased both sensitivity and specificity relative to manual grading: mean sensitivity rose significantly from 84.1% to 87.9% (p<0.05), and specificity from 56.3% to 87.5% (p<0.05). The conclusion was that automatic display of an ARID differenced image, where sequential photographs are available, would allow rapid assessment and appropriate triage.
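As a concrete illustration, the commonly cited form of the GRADE transformation (glucose in mmol/L) can be sketched as below. The constants and the use of the median as the profile summary follow the published GRADE description, but treat this as an assumption-laden sketch rather than the thesis's exact implementation:

```python
import math
import statistics

def grade_single(glucose_mmol_l):
    """Risk value for one glucose reading (commonly cited GRADE form)."""
    return 425 * (math.log10(math.log10(glucose_mmol_l)) + 0.16) ** 2

def grade_profile(readings):
    """Summary GRADE score for a profile: median of per-reading risk values."""
    return statistics.median(grade_single(g) for g in readings)

# Euglycaemia scores near zero; both hypo- and hyperglycaemia raise the score.
print(round(grade_single(4.9), 2))   # near the curve's minimum
print(round(grade_single(15.0), 1))  # hyperglycaemic reading
print(round(grade_single(3.0), 1))   # hypoglycaemic reading
```

The U-shape of the transformation is what lets a single number penalise excursions in either direction, which is why a profile with HbA1c 7.0 can still carry a high GRADE score if it swings widely.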
Fourth, non-linear dynamic systems analysis methods were used to build a system assessing the extent of chaotic characteristics within the insulin-glucose feedback domain. Some biological systems are deterministic yet neither predictable nor repeatable; instead they exhibit chaos, where a small change in the initial conditions produces a wholly different outcome. The glucose regulatory system, a dynamic system that maintains glucose homeostasis through the feedback of glucose, insulin and contributory hormones, is well suited to chaos analysis. To investigate this system a new algorithm was created to assess the Normalised Area of Attraction (NAA), calculated by defining an oval from the 95% confidence intervals of glucose and insulin (the limit cycle) on a phase plot. Thirty non-diabetic subjects and four subjects with type 2 diabetes were analysed. The NAA indicated a smaller range of glucose and insulin excursions in the non-diabetic subjects (p<0.05). The conclusion was that evaluating glucose metabolism in terms of homeostatic integrity, rather than cut-off values, may enable a more realistic approach to the effective treatment and prevention of diabetes and its complications.
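A minimal sketch of the NAA idea, assuming the oval's semi-axes are taken as the 1.96-standard-deviation (95%) ranges of glucose and insulin on the phase plot:

```python
import math
import statistics

def naa(glucose, insulin):
    """Area of the ellipse whose semi-axes are the 1.96-sd (95%) ranges of
    glucose and insulin on the phase plot: a sketch of the Normalised Area
    of Attraction. Units and any normalisation constant are assumptions."""
    a = 1.96 * statistics.pstdev(glucose)
    b = 1.96 * statistics.pstdev(insulin)
    return math.pi * a * b

# Tighter homeostasis (smaller excursions) gives a smaller area of attraction.
tight = naa([4.8, 5.0, 5.2, 5.0], [30, 32, 34, 32])
wide = naa([4.0, 9.0, 14.0, 9.0], [20, 60, 100, 60])
print(tight < wide)  # True
```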

Human layout estimation using structured output learning

Mittal, Arpit January 2012
In this thesis, we investigate the problem of human layout estimation in unconstrained still images: predicting the spatial configuration of body parts. We start our investigation with pictorial structure models and propose an efficient method of model fitting using skin regions. To detect skin, we learn a colour model locally from the image by detecting the facial region. The resulting skin detections are also used for hand localisation. Our next contribution is a comprehensive dataset of 2D hand images. We collected this dataset from publicly available image sources and annotated the images with hand bounding boxes. The bounding boxes are not axis-aligned but are oriented with respect to the wrist, and the dataset covers a wide range of hand shapes and layout configurations. Using this dataset, we train a hand detector that is robust to background clutter and lighting variations. The detector is implemented as a two-stage system: the first stage proposes hand hypotheses using complementary image features, which are then evaluated by a second-stage classifier. This improves both precision and recall, and results in a state-of-the-art hand detection method. In addition, we develop a new method of non-maximum suppression based on super-pixels. We also contribute an efficient training algorithm for structured output ranking, which reduces the time complexity of an expensive training component from quadratic to linear. This algorithm has broad applicability, and we use it to solve human layout estimation and taxonomic multiclass classification problems. For human layout, we use different body part detectors to propose part candidates, which are then combined and scored using our ranking algorithm. By applying this bottom-up approach, we achieve accurate human layout estimation despite variations in viewpoint and layout configuration.
In the multiclass classification problem, we define the misclassification error using a class taxonomy. The problem then reduces to structured output ranking, and we use our ranking method to optimise it. This allows the inclusion of semantic knowledge about the classes and results in a more meaningful classification system. Lastly, we substantiate our ranking algorithm with theoretical proofs and derive generalisation bounds for it; these bounds show that the training error converges asymptotically to the lowest achievable error.
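The taxonomy-aware misclassification error can be illustrated with a tree distance between the predicted and true classes, so that semantically close mistakes cost less. The parent-pointer taxonomy below is hypothetical, not taken from the thesis:

```python
def path_to_root(parent, node):
    """Node followed by its ancestors up to the root (parent-pointer tree)."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def taxonomic_loss(parent, predicted, actual):
    """Number of tree edges between two classes via their lowest common ancestor."""
    pa = path_to_root(parent, predicted)
    ancestors = set(pa)
    for steps, node in enumerate(path_to_root(parent, actual)):
        if node in ancestors:
            return pa.index(node) + steps
    return len(pa)  # disjoint forests: fall back to a maximal cost

# Hypothetical class taxonomy as child -> parent pointers.
parent = {"cat": "feline", "tiger": "feline", "dog": "canine",
          "feline": "animal", "canine": "animal"}
print(taxonomic_loss(parent, "cat", "tiger"))  # 2: both felines
print(taxonomic_loss(parent, "cat", "dog"))    # 4: related only via the root
print(taxonomic_loss(parent, "cat", "cat"))    # 0: correct prediction
```

Plugging such a loss into a ranking objective is what lets the classifier prefer "tiger" over "dog" when it cannot confidently say "cat".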

The acquisition of coarse gaze estimates in visual surveillance

Benfold, Ben January 2011
This thesis describes the development of methods for automatically obtaining coarse gaze direction estimates for pedestrians in surveillance video. Gaze direction estimates are beneficial in the context of surveillance as an indicator of an individual's intentions and of their interest in their surroundings and other people. The overall task is broken down into two problems. The first is tracking large numbers of pedestrians in low-resolution video, which is required to identify the head regions within video frames. The second is processing the extracted head regions and estimating the direction in which the person is facing, as a coarse estimate of their gaze direction. The first approach to head tracking combines image measurements from HOG head detections and KLT corner tracking using a Kalman filter, and can track the heads of many pedestrians simultaneously, outputting head regions with pixel-level accuracy. The second approach uses Markov-Chain Monte-Carlo Data Association (MCMCDA) within a temporal sliding window to provide similarly accurate head regions with improved speed and robustness: the improved system accurately tracks the heads of twenty pedestrians in 1920x1080 video in real time and can track through total occlusions for short periods. The approaches to gaze direction estimation all make use of randomised decision tree classifiers. The first develops classifiers for low-resolution head images that are invariant to hair and skin colour, using branch decisions based on abstract labels rather than direct image measurements. The second addresses higher-resolution images using HOG descriptors and novel Colour Triplet Comparison (CTC) branches. The final approach infers custom appearance models for individual scenes using weakly supervised learning over large datasets of approximately 500,000 images, in which a Conditional Random Field (CRF) models the interactions between appearance information and walking directions to estimate gaze directions for head image sequences.
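A one-coordinate constant-velocity Kalman filter, the building block behind this kind of head tracking, can be sketched as follows. This is a simplification: the thesis fuses HOG detections and KLT tracks over 2D head regions, whereas this sketch filters a single coordinate with assumed noise parameters:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a head position."""

    def __init__(self, p0, q=1e-3, r=1.0):
        self.p, self.v = p0, 0.0           # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, dt=1.0):
        self.p += self.v * dt
        P = self.P
        # P <- F P F^T + Q with F = [[1, dt], [0, 1]] (simplified diagonal Q)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]

    def update(self, z):
        y = z - self.p                # innovation (detection minus prediction)
        s = self.P[0][0] + self.r     # innovation variance
        k0 = self.P[0][0] / s         # Kalman gains for position and velocity
        k1 = self.P[1][0] / s
        self.p += k0 * y
        self.v += k1 * y
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

# Track a head moving 2 px per frame from noiseless detections.
kf = Kalman1D(p0=0.0)
for t in range(1, 20):
    kf.predict(dt=1.0)
    kf.update(z=2.0 * t)
```

After a handful of frames the velocity estimate settles near 2 px/frame, which is exactly what lets the filter coast through short occlusions when no detection arrives.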

Dense Stereo Reconstruction in a Field Programmable Gate Array

Sabihuddin, Siraj. 30 July 2008
Estimation of depth within an imaged scene can be formulated as a stereo correspondence problem. Software solutions tend to be too slow for high frame rate (> 30 fps) performance; hardware solutions can yield marked improvements. This thesis explores one such hardware implementation that generates dense binocular disparity estimates at frame rates of over 200 fps using a dynamic programming formulation (DPML) developed by Cox et al. A highly parameterizable field programmable gate array implementation of this architecture demonstrates equivalent accuracy while executing at significantly higher frame rates than current approaches. Existing hardware implementations for dense disparity estimation often use sum of squared differences, sum of absolute differences or similar algorithms that typically perform poorly in comparison to DPML. The presented system runs at 248 fps for a resolution of 320 x 240 pixels and a disparity range of 128 pixels, a throughput of 2.477 billion disparity estimates per second (DPS).
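A scanline dynamic-programming disparity search in the spirit of (but much simpler than) the Cox et al. formulation might look like this; the absolute-difference data cost and linear smoothness penalty are illustrative choices, and real DPML also models occlusions:

```python
def scanline_disparity(left, right, max_disp, smooth=0.5):
    """Per-pixel disparity for one scanline by dynamic programming:
    data cost |L[i] - R[i-d]| plus smooth * |d - d'| between neighbours."""
    n = len(left)
    INF = float("inf")
    cost = [[INF] * (max_disp + 1) for _ in range(n)]
    back = [[0] * (max_disp + 1) for _ in range(n)]
    for i in range(n):
        for d in range(max_disp + 1):
            if i - d < 0:
                continue  # disparity would fall off the right image
            data = abs(left[i] - right[i - d])
            if i == 0:
                cost[i][d] = data
                continue
            best, arg = INF, 0
            for dp in range(max_disp + 1):
                c = cost[i - 1][dp]
                if c == INF:
                    continue
                c += smooth * abs(d - dp)  # smoothness between neighbours
                if c < best:
                    best, arg = c, dp
            if best < INF:
                cost[i][d] = data + best
                back[i][d] = arg
    # Backtrack from the cheapest final disparity.
    d = min(range(max_disp + 1), key=lambda k: cost[n - 1][k])
    disp = [0] * n
    for i in range(n - 1, -1, -1):
        disp[i] = d
        d = back[i][d]
    return disp

# Right scanline shifted by 2 pixels to form the left view.
right = [5, 1, 9, 2, 7, 3, 8, 4, 6, 0]
left = [0, 0] + right[:-2]
print(scanline_disparity(left, right, 3))  # tail of the line recovers d = 2
```

The inner loop is O(n·D²) per scanline; the FPGA design wins by evaluating the disparity candidates for a pixel in parallel rather than sequentially.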

Robot control, based on artificial intelligence methods, for separating a piece of fabric from a stack and transferring it to the next processing stage

Ζουμπόνος, Γεώργιος (Zoumponos, Georgios). 14 February 2012
The apparel industry still relies heavily on manual labour. The main reason for the automation lag is that fabrics have very low bending rigidity and are therefore easily deformed, while also presenting a great variety of structures and properties; these facts deter the development of reliable and flexible robotic handling systems. In this thesis a method for separating and capturing a piece of fabric from a stack is presented, based on air flow over the stack: the difference in static pressure caused by the flow lifts the uppermost piece, while the turbulent nature of the flow separates it from the underlying pieces. Two systems are developed for autonomously determining the trajectory of a robot's end-effector for the simple task of laying a piece of fabric on a work table. These systems are based on computational intelligence, in particular fuzzy logic, and require neither additional apparatus nor knowledge of many mechanical properties of the fabrics. The task of folding a piece of fabric on a work table is investigated, and three stages are introduced into which the folding task can be decomposed to reduce the complexity of the overall manipulation. Each stage is analysed, and the shape characteristics selected to describe the state of the fabric at each stage are presented. A method is introduced for extracting these characteristics using two vision sensors, based on searching for them in specific regions of the image space; this is made possible by the calibration of the sensors. A strategy is developed for fabric folding based on fuzzy logic with vision feedback: the multi-input, multi-output fuzzy controller is trained by trial and error and provides the gains of a P-controller. The system is flexible and reliable for fabrics that satisfy the stated restrictions. Finally, a strategy is presented for controlling the true folding stage, in which two independent subsystems determine the target state of the fabric and drive the fabric towards that state, thus increasing the flexibility of the system. The methods developed in this thesis can serve as a starting point for introducing reliable and flexible automation to perform apparel-industry handling tasks with robots.

Robot Goalkeeper: A robotic goalkeeper based on machine vision and motor control

Adeboye, Taiyelolu. January 2018
This report describes a robust, efficient and speed-optimised implementation of real-time object recognition, 3D real-world localisation and tracking, focused on detecting and following an object in flight as applied to a football in motion. The overall goal was a system capable of recognising the ball and estimating its present and near-future location while actuating a robotic arm in response to its motion. The implementation used image-processing functions in C++, an NVIDIA Jetson TX1 and a Stereolabs ZED stereoscopic camera connected to an embedded controller for the robot arm. Image processing was performed against a textured background, and the 3D location coordinates were used in the correction step of a Kalman filter model that estimated and predicted the ball's location. A capture and processing rate of 59.4 frames per second was obtained with good depth accuracy, and the ball was tracked well in the tests carried out.
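Two of the ingredients, stereo triangulation and motion prediction, can be sketched as follows. The constant-velocity extrapolation stands in for the report's Kalman prediction, and the focal length and baseline values are illustrative, not the ZED's actual calibration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth for a rectified stereo rig: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def predict_position(track, dt_ahead):
    """Constant-velocity extrapolation of the last two timestamped 3D fixes,
    a stand-in for the Kalman prediction step used in the report."""
    (t0, p0), (t1, p1) = track[-2], track[-1]
    dt = t1 - t0
    return tuple(b + (b - a) / dt * dt_ahead for a, b in zip(p0, p1))

# 50 px disparity with a 700 px focal length and 12 cm baseline.
print(depth_from_disparity(50, 700.0, 0.12))  # 1.68 m
# Ball seen at (0,0,0) then (1,2,3): where will it be one step later?
print(predict_position([(0.0, (0, 0, 0)), (1.0, (1, 2, 3))], 1.0))
```

The arm controller needs the *predicted* interception point rather than the current fix, because actuation latency at ~60 fps spans several frames of ball flight.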

REGTEST - an Automatic & Adaptive GUI Regression Testing Tool

Forsgren, Robert; Petersson Vasquez, Erik. January 2018
Software testing is very common and is done to increase the quality of, and confidence in, software. This report proposes a tool for GUI regression testing that uses image recognition to perform the steps of test cases. The problem with such a solution is that if a GUI has been changed, many test cases may break. For this reason REGTEST was created: a GUI regression testing tool able to handle one type of change made to a GUI component, such as a change in colour, shape, location or text. This kind of solution is interesting because setting up tests with such a tool is fast and easy, whereas one previously significant drawback of using image recognition for GUI testing was that it did not handle changes well. It can be compared with tools that use IDs to perform a test, where the actual visualisation of a GUI component does not matter, only that the ID stays the same; however, such tools require either knowledge of the GUI component naming conventions or the use of tools that automatically construct XPath queries for the components. To verify that REGTEST can work as well as existing tools, a comparison was made against two professional tools, Ranorex and Kantu. In those tests REGTEST proved very successful, performing close to, or better than, the other tools.
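The image-recognition core of such a tool can be illustrated with normalised cross-correlation template matching; this is a generic sketch, not REGTEST's actual implementation:

```python
def ncc(a, b):
    """Normalised cross-correlation of two equal-length intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def find_widget(screen, w, h, template):
    """Top-left (x, y) of the best-scoring w x h patch in a grey screenshot
    (list of rows), plus its correlation score."""
    rows, cols = len(screen), len(screen[0])
    best, best_pos = -2.0, None
    for y in range(rows - h + 1):
        for x in range(cols - w + 1):
            patch = [screen[y + j][x + i] for j in range(h) for i in range(w)]
            s = ncc(patch, template)
            if s > best:
                best, best_pos = s, (x, y)
    return best_pos, best

# A 2x2 "widget" hidden in a 5x5 screenshot.
screen = [[0] * 5 for _ in range(5)]
screen[1][2], screen[1][3] = 9, 1
screen[2][2], screen[2][3] = 1, 9
print(find_widget(screen, 2, 2, [9, 1, 1, 9]))  # ((2, 1), 1.0)
```

Because NCC normalises out brightness and contrast, a uniform colour change to the widget leaves the peak position intact, which is the kind of GUI change the report's tool is designed to survive.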

Methodology of surface defect detection using machine vision with magnetic particle inspection on tubular material

Mahendra, Adhiguna. 08 November 2012
Industrial surface inspection of tubular material based on Magnetic Particle Inspection (MPI) is a challenging task. MPI is a well-known non-destructive testing method for revealing cracks at the surface of a metallic material: the surface is coated with a solution containing magnetic particles, magnetised and viewed under ultraviolet lighting. Even once a defect has been revealed, automatic detection without operator intervention remains problematic, and MPI-based inspection of tubular material at Vallourec production sites is still based on the judgement of a human inspector: a time-consuming, tedious job that is prone to error from eye fatigue. In this thesis we propose a machine vision approach to detect defects in MPI images of the tubular surface automatically, without human supervision and with the best possible detection rate; we focus on crack-like defects since they are the major type. A machine vision methodology is developed step by step, from image acquisition to defect classification, under industrial constraints in which accuracy, computational speed and simplicity are paramount. Based on MPI principles, an acquisition system is developed and optimised to acquire images of the tubular material for storage or processing. The geometric model and curvature characteristics of crack-like defects are used as prior knowledge for mathematical morphology and linear filtering. After segmentation and binarisation of the image a vast number of defect candidates exist; aside from geometric and intensity features, multi-resolution analysis is performed on the images to extract textural features. Finally, classification into defect and non-defect classes is performed with a Random Forest classifier, chosen for its robustness and speed, and compared with Support Vector Machines and decision trees. The first main contribution of this thesis is the optimisation of the parameters used in the segmentation steps (mathematical morphology filters, linear filtering) and in classification, using the robust Taguchi design-of-experiments method widely used in manufacturing, complemented by genetic algorithms; this optimisation yielded significant gains in time and efficiency. The second contribution concerns the extraction and selection of defect features. Two image databases were used, corresponding to two product types, tool joints and tube couplings, with one third of the images in each case used for training. The Random Forest classifier combined with geometric features and texture features extracted from a wavelet decomposition gave the best classification rate for defects on tool joints (95.5%, Figure 1). For the coupling tubes, the best classification rate was obtained by SVM with multi-resolution analysis (89.2%, Figure 2), but the Random Forest approach gave a good compromise at 82.4%. The principal industrial requirement of a 100% defect detection rate is thus approached, with rates of the order of 90%. The misdetection rates (false positives and false negatives) can be improved; they originate from the machined appearance of some parts of the tube ("hard bending"). Moreover, the methodology developed can be applied to the inspection, by MPI or otherwise, of other metallic product lines, especially those involving surface crack detection.
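The mathematical-morphology filtering step can be illustrated with binary erosion, dilation and their composition (opening) on a small image; a real system operates on full-resolution MPI images with structuring elements tuned to the crack geometry:

```python
def _neigh(img, y, x):
    """The 3x3 neighbourhood of (y, x) as a flat list."""
    return [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def erode(img):
    """Binary erosion with a 3x3 structuring element (border left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 1 if all(_neigh(img, y, x)) else 0
    return out

def dilate(img):
    """Binary dilation with a 3x3 structuring element (border left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 1 if any(_neigh(img, y, x)) else 0
    return out

def opening(img):
    """Erosion then dilation: removes speckle smaller than the element."""
    return dilate(erode(img))

# A 3x3 "defect" blob survives opening; an isolated speckle pixel does not.
img = [[0] * 8 for _ in range(8)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = 1
img[5][5] = 1  # noise speckle
out = opening(img)
```

Parameters like the element size are exactly the kind of knob the thesis tunes with Taguchi arrays and genetic algorithms rather than by hand.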

Development of a machine vision system for conformity inspection of an industrial product

Poleto, Arthur Suzini. January 2019
Advisor: João Antonio Pereira. Machine vision is a growing multidisciplinary field in industry, which is increasingly concerned with reducing costs, automating processes and meeting product quality requirements. Assembly processes performed manually with visual inspection and control are typically error-prone, allowing non-conforming parts into the final product assembly. This work proposes the development of a machine vision system, based on digital image processing and analysis, for inspecting the characteristics and specifications of the parts and components used in the assembly of marine boat covers (capotas marítimas), in order to verify and ensure the conformity of the final product. Inspection and conformity assessment are carried out in stages using two cameras: one captures the image of the product's alphanumeric identification code and the other inspects the set of fasteners. The images undergo a treatment process involving spatial filtering with an averaging mask for smoothing, contrast stretching to expand the range of intensities, and segmentation to form the objects of interest. An OCR function is used for character extraction and product-code recognition, and specific features of the fastener assembly are extracted using shape descriptors represented by moment invariants. These specific characteristics of the fasteners are used to assess the conformity of the product against its respective code. (Complete abstract available via the electronic access link.) Master's thesis.
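Moment-invariant shape description of the kind used for the fastener components can be sketched with the first Hu invariant, which is exactly translation-invariant (and scale-invariant in the continuous limit) for a binary image:

```python
def hu1(img):
    """First Hu moment invariant of a binary image (list of rows):
    eta20 + eta02, with eta_pq = mu_pq / m00**2 for p + q = 2."""
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = len(pts)
    cx = sum(x for x, _ in pts) / m00  # centroid
    cy = sum(y for _, y in pts) / m00
    mu20 = sum((x - cx) ** 2 for x, _ in pts)  # central second moments
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    return (mu20 + mu02) / m00 ** 2

# The same 2x2 square at two positions: identical descriptor.
s1 = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
s2 = [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
line = [[1, 1, 1, 1, 1, 1, 1, 1]]  # elongated shape scores higher
print(hu1(s1), hu1(s2), hu1(line))
```

A compact square fastener and an elongated clip thus get clearly separated descriptor values regardless of where they sit in the frame, which is what the conformity check needs.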
