91

戴眼鏡對人臉辨識系統之影響 / The Effects of Wearing Glasses on Face Recognition Systems

鄒博岱, Tsou, Po-Tai. Unknown Date (has links)
The objective of this thesis is to investigate the efficacy of face recognition systems when the subjects are wearing glasses. We do not presume that non-facial features such as glasses are nuisances; instead, we study whether the inclusion of glasses can have a positive impact on the face detection procedure and how it affects the feature extraction process. Building on edge maps and the analysis of edge-point strength, we construct a detector that locates the glasses, then uses the detected frame position, together with edge-point strength and density comparisons, to locate the eyes, and finally reuses information from these two steps to locate the nose and mouth. Together, these algorithms form a simple facial-feature localisation system that can handle faces wearing glasses, and we demonstrate how such local feature analysis can reduce the uncertainty in the matching result caused by interference around the eyes and nose from optical glasses. We have also conducted extensive experiments to analyse the effect of glasses on face recognition systems based on a global matching strategy. Specifically, we perform both principal component analysis (PCA) and independent component analysis (ICA) on face databases with different percentages of subjects wearing glasses, and we use the same methodology to compare how strongly non-facial objects (glasses), illumination, and head pose each hinder recognition. We conclude that external objects such as glasses have a negative impact on face recognition using global analysis approaches; however, the adverse influences of illumination and pose are more conspicuous during the recognition process. Therefore, one should take caution when attempting to adapt the global matching scheme to handle the difficulties caused by glasses.
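The global-matching experiments rest on projecting faces into a low-dimensional eigenspace and matching there. A minimal eigenface-style PCA matcher in that spirit (not the author's code) might look like the following NumPy sketch; image loading, alignment, and the glasses/no-glasses gallery splits are assumed to happen elsewhere:

```python
import numpy as np

def pca_train(X, n_components=50):
    """Learn an eigenface basis from training faces.

    X: (n_samples, n_pixels) matrix, one flattened face image per row.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal axes (eigenfaces).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(x, mean, basis):
    # Coordinates of one flattened face in the eigenface subspace.
    return basis @ (x - mean)

def recognize(probe, gallery_codes, gallery_labels, mean, basis):
    """Nearest-neighbour match in eigenspace: return the closest identity."""
    code = pca_project(probe, mean, basis)
    d = np.linalg.norm(gallery_codes - code, axis=1)
    return gallery_labels[int(np.argmin(d))]
```

Running the same pipeline on galleries with different percentages of bespectacled subjects is then a matter of swapping the training matrix `X`.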
92

Table tennis event detection and classification

Oldham, Kevin M. January 2015 (has links)
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low-cost, single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV-based ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total, a library of 276 event-based video sequences, using a range of recording hardware, was produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single-source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100 km/h) and small ball size (40 mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular-based event classification vision system. As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location outputs, which ultimately affects the ability of automated event detection and decision-making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event-based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low-cost, single-camera ball trajectory and event investigation, experimental results show that the Horn-Schunck optical flow algorithm, with a REI of 163.5, is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence-based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net.
In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker-based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01 mm when applied to a selection of nineteen video sequences and events. This tolerance is within 10% of the diameter of the ball and accounted for by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D-based CV processing to match-play event classification, with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or the racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular-based CV processing of motion-based event analysis and classification in a wider range of applications.
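The core idea — classifying events from changes in 2D ball motion alone — can be illustrated with a toy sketch (not the author's algorithm): flag frames where a velocity component of the tracked ball reverses sign, a crude proxy for bounces and racket impacts. The minimum-gap parameter is an illustrative assumption:

```python
import numpy as np

def detect_motion_events(xy, min_gap=5):
    """Flag frames where the ball's 2D motion changes abruptly.

    xy: (n_frames, 2) array of ball centres from a monocular CV tracker.
    Returns frame indices where either velocity component reverses sign.
    """
    v = np.diff(xy, axis=0)                      # per-frame velocity
    sign_flip = (v[1:] * v[:-1] < 0).any(axis=1) # x- or y-velocity reversal
    candidates = np.flatnonzero(sign_flip) + 1
    # Suppress clusters: keep only events at least `min_gap` frames apart.
    events, last = [], -min_gap
    for f in candidates:
        if f - last >= min_gap:
            events.append(int(f))
            last = f
    return events
```

A real classifier would then label each flagged frame (bounce, serve, return) from context such as table position and motion direction, which is the part the thesis addresses.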
93

Segmentação de imagens SPECT/Gated-SPECT do miocárdio e geração de um mapa polar. / Segmentation of myocardial SPECT/Gated-SPECT images and polar map generation.

Paula, Luis Roberto Pereira de 23 May 2011 (has links)
Single photon emission computed tomography (SPECT) is a nuclear medicine imaging technique based on measuring the spatial distribution of a radionuclide. The technique is widely used in cardiology to assess myocardial perfusion problems related to blood flow in the coronary arteries. SPECT images provide better separation of the regions of the myocardium and make perfusion defects easier to locate and delineate. One of the major challenges in SPECT studies is presenting the information efficiently, since a single study can generate hundreds of slice images to be analysed. To address this issue, polar maps (also known as Bull's Eye displays) are used: built from slices of the left ventricle, they summarise the information of an exam in a single two-dimensional image. This dissertation presents a method for segmenting the left ventricle in myocardial SPECT studies and for constructing polar maps; the segmentation of the left ventricle is performed to facilitate the automatic generation of the maps. The method uses the watershed transform in the context of the Beucher-Meyer paradigm. To display the results, an application called Medical Image Visualizer (MIV) was developed. MIV will be released as an open-source project, so the communities of users, developers and researchers will be able to freely use and/or modify it.
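A marker-controlled (Beucher-Meyer) watershed of the kind described can be sketched with scikit-image; the marker masks, which in the dissertation come from the automated pipeline, are assumed given here:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_lv(slice_img, inner_marker_mask, outer_marker_mask):
    """Marker-controlled (Beucher-Meyer) watershed on one SPECT slice.

    slice_img: 2D float array. The marker masks are boolean arrays marking
    pixels known to be inside / outside the left-ventricle region.
    """
    gradient = sobel(slice_img)            # flood the gradient magnitude
    markers = np.zeros(slice_img.shape, dtype=np.int32)
    markers[inner_marker_mask] = 1         # seed for the ventricle
    markers[outer_marker_mask] = 2         # seed for the background
    labels = watershed(gradient, markers)  # basins grow from the two seeds
    return labels == 1                     # region grown from the inner marker
```

The Beucher-Meyer paradigm is exactly this pattern: flood a gradient image from hand-picked (or automatically computed) markers so the watershed lines settle on the strongest edges between them.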
94

Um estudo sobre a extração de características e a classificação de imagens invariantes à rotação extraídas de um sensor industrial 3D / A study on feature extraction and the classification of rotation-invariant images acquired from a 3D industrial sensor

Rodrigo Dalvit Carvalho da Silva 08 May 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, the problem of recognising objects in images extracted from a 3D industrial sensor is discussed. We focus on 9 feature extractors (seven based on invariant moments: Hu, Zernike, Legendre, Fourier-Mellin, Tchebichef, Bessel-Fourier and Gaussian-Hermite; one based on the Hough transform; and one on independent component analysis) and 4 classifiers (Naive Bayes, k-Nearest Neighbours, Support Vector Machine and Artificial Neural Network/Multi-Layer Perceptron). To choose the best feature extractor, classification performance was compared in terms of accuracy rate and extraction time, using the k-Nearest Neighbours classifier with Euclidean distance. The feature extractor based on Zernike moments obtained the best accuracy rate, 98.00%, with a relatively low feature extraction time of 0.3910 seconds. The data it generated were then presented to the different classification heuristics. Among the classifiers tested, k-Nearest Neighbours achieved the highest average accuracy rate, 98.00%, with a relatively low average classification time of 0.0040 seconds, making it the most suitable classifier for the application in this study.
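A sketch of the winning combination — Zernike-moment features fed to a k-Nearest-Neighbours classifier with Euclidean distance — could use mahotas and scikit-learn; the radius and degree below are illustrative assumptions, not the thesis's settings, and the object images are assumed already segmented and centred:

```python
import mahotas
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def zernike_features(obj_img, radius=64, degree=8):
    # Rotation-invariant magnitudes of the Zernike moments of the object.
    return mahotas.features.zernike_moments(obj_img, radius, degree=degree)

def classify(train_imgs, train_labels, test_imgs, k=1):
    """Extract Zernike features and classify with k-NN (Euclidean metric)."""
    X_train = np.array([zernike_features(im) for im in train_imgs])
    X_test = np.array([zernike_features(im) for im in test_imgs])
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    knn.fit(X_train, train_labels)
    return knn.predict(X_test)
```

Because only the moment magnitudes are kept, the features are invariant to in-plane rotation of the part on the sensor, which is what the thesis exploits.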
95

Quest for new nuclear magic numbers with MINOS / Quête de nouveaux nombres magiques nucléaires avec MINOS

Santamaria, Clémentine 07 September 2015 (has links)
The MINOS device was developed until mid-2013 for in-beam γ-ray spectroscopy of very exotic nuclei produced in proton-knockout reactions. It is composed of a thick liquid-hydrogen target, to achieve higher luminosities, and a time projection chamber (TPC) to reconstruct the reaction vertex position and compensate for the effect of the thick target on the Doppler correction. The TPC was developed with the expertise of CEA-Irfu in gaseous and Micromegas detectors. At first, different solutions for the TPC were tested in a test chamber with an α source and cosmic-ray measurements; cosmic rays were detected for the first time with the test chamber in early 2013, validating the use of a Micromegas detection plane. The first TPC prototype was finished in May 2013, and a cosmic-ray bench was used to estimate the efficiency of the TPC.
The device was then shipped to Japan, and an in-beam performance test was carried out at the HIMAC medical facility (Chiba, Japan) with two thin targets instead of the thick hydrogen target, to validate the tracking algorithm and the vertex position resolution. A tracking algorithm for the offline analysis, based on the Hough transform, was developed, tested with these data, and compared with simulations. The first physics campaign using MINOS took place in May 2014 with SEASTAR, focusing on the first spectroscopy of ⁶⁶Cr, ⁷⁰,⁷²Fe, and ⁷⁸Ni. The analysis of the ⁶⁶Cr spectroscopy revealed two transitions, assigned to the first two excited states. An interpretation with shell-model calculations shows that the maximum of quadrupole collectivity occurs at N=40 along the Cr isotopic chain. ⁶⁶Cr is still placed in the N=40 Island of Inversion, and the shell-model calculations, as well as a comparison with HFB-based calculations, suggest an extension of this Island of Inversion towards N=50 below ⁷⁸Ni. The analysis of ⁷⁰,⁷²Fe performed by C. Louchart (TU Darmstadt, Germany) reveals the same trend as for the Cr isotopes, with a maximum of deformation at N=42. The full data set and our shell-model interpretation suggest a large collectivity for neutron-rich Cr and Fe, possibly up to N=50, questioning the robustness of the N=50 shell closure below ⁷⁸Ni.
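The Hough-transform tracking idea — voting hit points into a (ρ, θ) accumulator to find straight-track candidates through the TPC — can be sketched in a few lines of NumPy. This illustrates the principle only, not the MINOS offline code; the accumulator binning is an assumption:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=300.0):
    """Vote 2D hit points (x, y) into a (rho, theta) accumulator.

    Returns the accumulator and the (rho, theta) of the strongest line,
    i.e. the best straight-track candidate through the hits.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t              # one rho value per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[idx[valid], np.arange(n_theta)[valid]] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return acc, (rhos[i], thetas[j])
```

Extrapolating the strongest line(s) back to the beam axis then gives the vertex estimate that the Doppler correction needs.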
96

Αυτόματη ανίχνευση του αρτηριακού τοιχώματος της καρωτίδας από εικόνες υπερήχων β-σάρωσης / Automatic detection of the carotid arterial wall in B-mode ultrasound images

Ματσάκου, Αικατερίνη 10 August 2011 (has links)
In this thesis, a fully automatic segmentation method for detecting the carotid artery wall in longitudinal B-mode ultrasound images is presented, based on a combination of the Hough transform for straight-line detection with active contours. A Hough-transform-based procedure defines the initial snake, which is then deformed as a gradient vector flow (GVF) snake. The GVF snake is based on the calculation of the image edge map and of the gradient vector flow field, which guides the deformation towards the real arterial wall boundaries. The proposed methodology was applied to twenty healthy and eighteen atherosclerotic carotid cases, in order to calculate the lumen diameter and to evaluate the method by means of receiver operating characteristic (ROC) analysis. According to the results, there was no statistically significant difference between the diameter measurements produced by the automated segmentation and the corresponding manual measurements. In healthy cases the sensitivity, specificity and accuracy were 0.97, 0.99 and 0.98, respectively, for both diastolic and systolic images; in atherosclerotic cases the corresponding values were greater than 0.89, 0.96 and 0.93. In conclusion, the proposed methodology provides an accurate and reliable way to segment ultrasound images of the carotid wall and can be used in clinical practice.
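The GVF field that drives the snake deformation can be sketched as the Xu-Prince iteration below (NumPy only; the edge map is assumed normalised to [0, 1], and μ and the iteration count are illustrative assumptions):

```python
import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200):
    """Compute the GVF field (Xu & Prince) that deforms the snake.

    edge_map: 2D array in [0, 1], large near edges (e.g. |grad| of a
    smoothed image). Returns the (u, v) vector field.
    """
    fx, fy = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                   # data-term weight
    for _ in range(n_iter):
        # 5-point Laplacian via rolls (crude periodic boundary handling).
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        # Diffuse the field far from edges, pin it to the gradient near them.
        u += mu * lap_u - mag2 * (u - fx)
        v += mu * lap_v - mag2 * (v - fy)
    return u, v
```

The Hough-initialised contour is then moved iteratively along (u, v), which extends the capture range of the snake well beyond the narrow band where the raw edge gradient is non-zero.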
97

Mathematical imaging tools in cancer research: from mitosis analysis to sparse regularisation

Grah, Joana Sarah January 2018 (has links)
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor the behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo and shade-off effects impede image analysis. We present a novel workflow uniting automated mitotic cell detection with the Hough transform and subsequent cell tracking by a tailor-made level-set method, in order to obtain statistics on the length of mitosis and cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation in the general context of imaging inverse problems, in which circular objects should be enhanced, (ii) exploiting sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to an image to be reconstructed, using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how we can learn sparsity-promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised eigenproblems. Learning approaches have recently become very popular in the field of inverse problems; however, the majority aim at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling the behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers, and extend this framework to classification problems.
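For step (i), detecting round mitotic cells with the circular Hough transform can be sketched with OpenCV; the radius range and accumulator thresholds below are illustrative assumptions, not MitosisAnalyser's actual parameters:

```python
import cv2
import numpy as np

def detect_round_cells(frame, r_min=8, r_max=25):
    """Detect bright, nearly circular (mitotic-looking) cells with the
    circular Hough transform. Returns an array of (x, y, r) candidates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)           # soften halo artefacts a little
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=2 * r_min,
        param1=100, param2=30, minRadius=r_min, maxRadius=r_max)
    return np.empty((0, 3), int) if circles is None else \
        np.round(circles[0]).astype(int)
```

Each detected circle would then seed the level-set tracker that follows the cell through and beyond mitosis.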
98

Sistema de visão para guiar um robô de manipulação de cabeçotes fundidos / An industrial robot guided by a vision system for handling foundry head blocks

Semim, Ramon Cascaes 13 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents the development of a low-cost vision system for the robotized work cell installed in a foundry finishing line at Tupy S.A. The cell uses an industrial robot to perform the oil-bath and palletizing operations on cast head blocks, with the robot moving the parts between work stations. Head blocks enter the cell on a roller conveyor that carries them into the robot's grasp area. The cell processes 26 different part types and is fed manually, so the position/orientation of each head block on the conveyor is random. A low-cost webcam and low-cost lighting modules were used in the project. The vision system must identify the model of each head block and find its position/orientation; this information is sufficient for the robot to grasp the parts correctly on the input conveyor, regardless of the orientation with which they entered the line. The main image processing tools used in the project are Pearson correlation and the Hough transform. System calibration defines the position and orientation of the camera coordinate system, as well as the pixel size; these parameters are essential for computing the robot's grasp position correctly. To grasp the parts, the robot repositions and reorients its coordinate system. The industrial robot used in this work is an ABB IRB6640; it has six degrees of freedom and uses quaternions to define the position/orientation of its end-effector.
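The classification step built on Pearson correlation can be sketched with OpenCV's normalised correlation template matching (TM_CCOEFF_NORMED is a mean-shifted, Pearson-style correlation score); template preparation and the Hough-based orientation recovery are assumed to happen elsewhere:

```python
import cv2

def classify_and_locate(image, templates):
    """Pick the best-matching head-block template and its image position.

    templates: dict mapping model name -> grayscale template image.
    Returns (model name, correlation score, top-left corner in pixels).
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    best = (None, -1.0, (0, 0))
    for name, tpl in templates.items():
        # Normalised cross-correlation map over all template placements.
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[1]:
            best = (name, score, loc)
    return best
```

The pixel location, converted through the calibrated pixel size and camera pose, is what the robot controller would translate into a grasp coordinate frame.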
99

Mapeamento de ambientes estruturados com extração de informações geométricas através de dados sensoriais / Mapping structured environments with extraction of geometric information from sensor data

Pedrosa, Diogo Pinheiro Fernandes 19 May 2006 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This thesis proposes a method for a mobile robot to build a hybrid map of an indoor, semi-structured environment. The topological part of the map deals with the spatial relationships among rooms and corridors: it is a graph in which each node is a room or corridor, and each edge between two distinct nodes represents a door. The metric part of the map consists of a set of parameters describing a planar geometric figure that best fits the local free space. This figure is computed from a set of points that sample the boundaries of the local free space, obtained with range sensors together with knowledge of the robot's pose. A method based on the generalized Hough transform is applied to this set of points to obtain the figure. The hybrid map is built incrementally, while the robot explores the environment; each room is associated with a local metric map and, consequently, with a node of the topological graph. During mapping, the robot may use recently acquired metric information about the environment to improve its global pose estimate or its pose relative to a room or corridor.
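As a rough illustration of fitting a geometric figure to sampled free-space boundaries, the sketch below uses Hough-style orientation voting to fit an oriented rectangle to range points; it conveys the voting idea under the assumption of rectangular rooms, but is not the thesis's generalized-Hough formulation:

```python
import numpy as np

def fit_room_rectangle(points, n_bins=90):
    """Fit an oriented rectangle to boundary samples of the local free space.

    points: (n, 2) array of range-sensor hits in world coordinates, ordered
    along the scan. Votes for the dominant wall orientation (mod 90 degrees),
    then takes the extents along the rotated axes.
    """
    seg = np.diff(points, axis=0)                          # boundary segments
    ang = np.arctan2(seg[:, 1], seg[:, 0]) % (np.pi / 2)   # walls repeat every 90°
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, np.pi / 2),
                               weights=np.hypot(seg[:, 0], seg[:, 1]))
    theta = edges[np.argmax(hist)]                         # dominant orientation
    c, s = np.cos(-theta), np.sin(-theta)
    rot = points @ np.array([[c, -s], [s, c]]).T           # de-rotate the points
    (xmin, ymin), (xmax, ymax) = rot.min(axis=0), rot.max(axis=0)
    return theta, (xmin, ymin, xmax, ymax)                 # orientation + extents
```

The fitted parameters (orientation plus extents) are exactly the kind of compact metric description that gets attached to each node of the topological graph.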
100

Pupilometria dinâmica: uma proposta de rastreamento da posição e tamanho da pupila humana em tempo real / Dynamic pupillometry: a proposal for real-time tracking of the position and size of the human pupil

Dias, Alessandro Gontijo da Costa 17 January 2014 (has links)
Fundação de Amparo à Pesquisa do Estado de Minas Gerais / The study of pupil movements (contraction and dilation) is of significant clinical interest, since it is used as a method of evaluating both the human visual system and the nervous system. This work presents a technique for tracking both the position and the diameter of the human pupil in real time, which is crucial for extracting parameters related to pupillary movements; these, in turn, aid in the diagnosis of various diseases, among other applications. The technique uses the size and position of the pupil found in the previous frame in an algorithm that reduces the time spent tracking the pupil in the next frame. With this procedure it is possible to track the human pupil in real time, in some cases at an average rate of up to 140 frames per second. Comparisons with other studies were performed both for the processing time of each frame and for the accuracy of the pupil location and diameter found in each image.
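The previous-frame strategy can be sketched with OpenCV: search only a window around the last pupil estimate, threshold the dark blob, and refit a circle. The padding factor and intensity threshold are illustrative assumptions, not the thesis's values:

```python
import cv2

def track_pupil(gray, prev_cx, prev_cy, prev_r, pad=2.5, thresh=40):
    """One tracking step: search only near the previous frame's pupil.

    gray: 8-bit grayscale eye image. Crops a window around the last
    position/size, thresholds the dark pupil region and refits a circle;
    the ROI reuse is what keeps the per-frame cost low.
    """
    h, w = gray.shape
    half = int(pad * prev_r)
    x0, y0 = max(prev_cx - half, 0), max(prev_cy - half, 0)
    x1, y1 = min(prev_cx + half, w), min(prev_cy + half, h)
    roi = gray[y0:y1, x0:x1]
    _, mask = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY_INV)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return prev_cx, prev_cy, prev_r        # lost: keep the last estimate
    c = max(cnts, key=cv2.contourArea)         # largest dark blob = pupil
    (cx, cy), r = cv2.minEnclosingCircle(c)
    return int(cx) + x0, int(cy) + y0, int(r)
```

Because the thresholding runs on a small crop rather than the full frame, per-frame cost stays nearly constant, consistent with the high frame rates reported above.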
