111 |
Método de medição de alinhamento de suspensão veicular não intrusivo baseado em visão computacional / Non-intrusive method for the measurement of alignment angles of vehicular suspension based on computer vision. Mingoto Junior, Carlos Roberto, 21 August 2018
Advisor: Paulo Roberto Gardel Kurka / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Issue date: 2012 / Abstract: This research project applies stereoscopic computer vision techniques to the development of equipment for measuring vehicular suspension alignment angles, using low-cost video cameras. Currently, most devices for measuring vehicle suspension alignment angles rely on electromechanical components such as resistive pendulums, capacitive inclinometers, or opto-mechanical devices (mirrors and low-intensity monochromatic light beams). From the algebraic fundamentals and computer vision techniques laid out here, studies of scientific feasibility are carried out and the construction of equipment for verifying vehicular alignment angles is proposed. Virtual and real tests are presented, illustrating the operational potential of the equipment. / Doctorate / Mecanica dos Sólidos e Projeto Mecanico / Doutor em Engenharia Mecânica
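To make the measurement principle concrete, here is a minimal sketch of how the stereo step could work: matched marker points on the wheel face are triangulated from a rectified camera pair, a plane is fitted to them, and toe and camber follow from the orientation of the plane normal. This is an illustration only, not the thesis' implementation; the calibration parameters, the axis convention (X lateral, Y vertical, Z along the optical axis) and the angle definitions are all assumptions.

```python
import numpy as np

def triangulate(uv_left, uv_right, f, cx, cy, baseline):
    """3-D points from a rectified stereo pair; uv_* are (N, 2) pixel
    coordinates of matched wheel markers (assumed inputs)."""
    disparity = uv_left[:, 0] - uv_right[:, 0]
    Z = f * baseline / disparity
    X = (uv_left[:, 0] - cx) * Z / f
    Y = (uv_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])

def wheel_angles(points):
    """Fit a plane to the marker points and read toe and camber off the
    plane normal (sign conventions assumed for illustration)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)       # normal = smallest singular vector
    n = vt[-1] / np.linalg.norm(vt[-1])
    toe = np.degrees(np.arctan2(n[2], n[0]))  # rotation about the vertical axis
    camber = np.degrees(np.arcsin(n[1]))      # tilt away from vertical
    return toe, camber
```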
|
112 |
Scale Invariant Object Recognition Using Cortical Computational Models and a Robotic Platform. Voils, Danny, 01 January 2012
This paper proposes an end-to-end, scale-invariant visual object recognition system composed of computational components that mimic the cortex in the brain. The system uses a two-stage process. The first stage is a filter that extracts scale-invariant features from the visual field. The second stage uses inference-based spatio-temporal analysis of these features to identify objects in the visual field. The proposed model combines Numenta's Hierarchical Temporal Memory (HTM) with HMAX, developed by MIT's Brain and Cognitive Sciences Department. While these two biologically inspired paradigms are based on what is known about the visual cortex, HTM and HMAX tackle the overall object recognition problem from different directions. Image-pyramid-based methods like HMAX make explicit use of scale, but have no sense of time. HTM, on the other hand, only indirectly tackles scale, but makes explicit use of time. By combining HTM and HMAX, both scale and time are addressed. In this paper, I show that HTM and HMAX can be combined to make a complete cortex-inspired object recognition model that explicitly uses both scale and time to recognize objects in temporal sequences of images. Additionally, through experimentation, I examine several variations of HMAX and its
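As a rough illustration of the scale half of such a model, the sketch below builds a stack of progressively blurred images and applies HMAX-style C-layer pooling, a maximum over local position and over adjacent scales, which is the operation that buys scale tolerance in image-pyramid methods. This is a toy reduction (the real HMAX S1 layer uses Gabor filters); every parameter here is an assumption.

```python
import numpy as np
from scipy import ndimage

def scale_stack(image, n_scales=4, sigma=1.6):
    """Progressively blurred copies of the image, one per scale."""
    scales = [image.astype(float)]
    for _ in range(n_scales - 1):
        scales.append(ndimage.gaussian_filter(scales[-1], sigma))
    return scales

def c_layer_pool(scales, pool=8):
    """HMAX-style C layer: max over pairs of adjacent scales and over a
    local spatial neighborhood, then subsample."""
    out = []
    for s0, s1 in zip(scales[:-1], scales[1:]):
        m = np.maximum(s0, s1)                    # max across adjacent scales
        m = ndimage.maximum_filter(m, size=pool)  # max over local position
        out.append(m[::pool, ::pool])             # trade resolution for tolerance
    return out
```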
|
113 |
Adaptive Estimation and Control with Application to Vision-based Autonomous Formation Flight. Sattigeri, Ramachandra Jayant, 17 May 2007
The role of vision as an additional sensing mechanism has received a lot of attention in recent years in the context of autonomous flight applications. Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because of their light weight and low cost, and because of their ability to provide a rich variety of information about the environment in which the UAVs navigate. The problem of vision-based autonomous flight is difficult and challenging, since it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control.
This thesis focuses on the adaptive state estimation, guidance and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter serves as the nominal state estimator and is augmented by the output of an adaptive neural network (NN) trained with two error signals. The benefit of the proposed approach is faster and more accurate adaptation to modeling errors than a conventional approach provides.
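Reduced to a scalar toy problem, the composite architecture might look like the sketch below: a nominal Kalman filter whose prediction is corrected by a linear-in-parameters approximator adapted online. Here only the innovation drives the weight update, whereas the thesis trains a neural network with two error signals; the dynamics, features and gains are illustrative assumptions, not the thesis' design.

```python
import numpy as np

class CompositeEstimator:
    """Scalar Kalman filter augmented by an online-adapted correction."""

    def __init__(self, q=1e-3, r=1e-2, n_features=8, lr=0.05):
        self.x, self.p = 0.0, 1.0        # state estimate and covariance
        self.q, self.r = q, r            # process / measurement noise
        self.w = np.zeros(n_features)    # adaptive weights
        self.lr = lr
        self.centers = np.linspace(-2.0, 2.0, n_features)

    def features(self, x):
        # Gaussian radial basis functions evaluated at the estimate.
        return np.exp(-(x - self.centers) ** 2)

    def step(self, z, f=1.0, h=1.0):
        phi = self.features(self.x)
        x_pred = f * self.x + self.w @ phi        # prediction + adaptive term
        p_pred = f * self.p * f + self.q
        innov = z - h * x_pred                    # innovation (training signal)
        k = p_pred * h / (h * p_pred * h + self.r)
        self.x = x_pred + k * innov
        self.p = (1.0 - k * h) * p_pred
        self.w += self.lr * innov * phi           # gradient-style adaptation
        return self.x
```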
The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined under a time-scale separation assumption. The second approach integrates the guidance and autopilot design process. The G&C laws developed with both approaches are adaptive to unmodeled leader-aircraft acceleration and to own-aircraft aerodynamic uncertainties.
The thesis also presents theoretical justification based on Lyapunov-like stability analysis for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, 6DOF fixed-wing aircraft simulations.
Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control. In this approach, each aircraft in the formation regulates its range to as many as two nearest neighboring aircraft while simultaneously tracking nominal desired trajectories common to all aircraft and avoiding static obstacles.
|
114 |
Stochastically optimized monocular vision-based navigation and guidance. Watanabe, Yoko, 07 December 2007
The objective of this thesis is to design a relative navigation and guidance system for unmanned aerial vehicles (UAVs) for vision-based control applications. Vision-based navigation, guidance and control has been one of the most intensively studied research topics in the automation of UAVs, because in nature birds and insects use vision as their exclusive sensor for object detection and navigation. In particular, this thesis studies monocular vision-based navigation and guidance.
Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight has been achieved. In addition, vision-based 3-D terrain recovery was performed in simulations.
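The nonlinearity mentioned above comes from pinhole projection: the 2-D pixel measurement is a ratio of the 3-D relative states, so the EKF linearizes it at each step. A minimal measurement update under that model might look as follows; the state parameterization (relative position in the camera frame), focal length and noise values are assumptions for illustration.

```python
import numpy as np

def ekf_update(x, P, z, f=800.0, R=np.eye(2) * 4.0):
    """One EKF update for a pinhole measurement of the relative position
    x = [X, Y, Z]; z is the measured pixel offset from the image center."""
    X, Y, Z = x
    h = np.array([f * X / Z, f * Y / Z])          # predicted measurement
    H = np.array([[f / Z, 0.0, -f * X / Z**2],    # Jacobian of h w.r.t. x
                  [0.0, f / Z, -f * Y / Z**2]])
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ (z - h)
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```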
A vision-based obstacle avoidance problem is specifically addressed in this thesis. A navigation and guidance system is designed for a UAV to achieve a waypoint-tracking mission while avoiding unforeseen stationary obstacles by using vision information. A 3-D collision criterion is established by using a collision-cone approach. A minimum-effort guidance (MEG) law is applied in the guidance design, and it is shown that the control effort can be reduced by using the MEG-based guidance instead of a conventional guidance law. The system is evaluated in a 6-DoF flight simulation and also in a flight test.
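The collision-cone idea admits a compact geometric test: a collision course is declared when the relative velocity points inside the cone subtended by a safety sphere around the obstacle. The version below is the standard textbook form, reconstructed here as an illustration; it is not necessarily the exact criterion used in the thesis.

```python
import numpy as np

def on_collision_course(rel_pos, rel_vel, safe_radius):
    """rel_pos: obstacle position relative to the vehicle; rel_vel: its
    time derivative (assumed nonzero). Returns True if the predicted
    miss distance is below safe_radius."""
    rng = np.linalg.norm(rel_pos)
    if rng <= safe_radius:
        return True                        # already inside the safety sphere
    cos_theta = -rel_pos @ rel_vel / (rng * np.linalg.norm(rel_vel))
    half_angle = np.arcsin(safe_radius / rng)
    return cos_theta > np.cos(half_angle)  # velocity lies inside the cone
```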
For monocular vision-based control problems, vision-based estimation performance depends strongly on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law that achieves a given mission under the condition of using the EKF-based relative navigation. A stochastic optimization is formulated to minimize the expected cost, including the guidance error and the control effort. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization. Simulation results show that the suggested guidance law significantly improves guidance performance. Furthermore, the OSA optimization is generalized to n-step-ahead optimization for an arbitrary number of steps n, and the optimality and computational cost of these schemes are investigated.
|
115 |
Efficient construction of multi-scale image pyramids for real-time embedded robot vision. Entschev, Peter Andreas, 16 December 2013
Interest point detectors, or keypoint detectors, have long been of great interest for embedded robot vision, especially those that provide robustness against geometric variations such as rotation, affine transformations and changes in scale. The detection of scale-invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in scale space, an approach present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with properties suitable for object recognition, but at the same time are computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method aims at using computationally less expensive techniques and reusing already-processed information efficiently in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters, instead of the conventional Gaussian filters used in the original SIFT method, to compute multiple scales of an image. Binomial filters have the advantage that they can be implemented in fixed-point notation, a big advantage for the many embedded systems that lack native floating-point support. We also reduce the number of convolution operations needed by resampling already-processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it efficiently on a SIMD (Single Instruction, Multiple Data) platform -- the ARM Neon extension available in the BeagleBoard-xM's ARM Cortex-A8 processor. SIMD platforms in general are very useful for multimedia applications, where the same operation must normally be performed over several elements, such as pixels in images, enabling multiple data to be processed with a single processor instruction. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide comparison results for our method and the original SIFT approach, including execution time and repeatability of detected keypoints. With a straightforward implementation (without the SIMD platform), we show that our method takes approximately 1/4 of the time needed to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found by the original method. With a complete fixed-point approach (including vectorization on the SIMD platform), repeatability reaches up to 92% of the original SIFT keypoints while the processing time drops to less than 3%.
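The fixed-point advantage of binomial filters is easy to see in a sketch: the 5-tap binomial kernel [1, 4, 6, 4, 1] / 16 approximates a Gaussian and its normalizer is a power of two, so a separable blur reduces to integer adds and shifts. The numpy version below only illustrates the arithmetic; border handling by wrap-around is a simplification, and a real embedded implementation would replicate edges and use Neon intrinsics rather than numpy.

```python
import numpy as np

def binomial_blur_u8(image):
    """Separable 5-tap binomial blur on a uint8 image, integers only."""
    img = image.astype(np.int32)

    def blur_1d(a, axis):
        # Weighted sum of shifted copies: kernel [1, 4, 6, 4, 1].
        acc = sum(np.roll(a, s, axis=axis) * w
                  for s, w in [(-2, 1), (-1, 4), (0, 6), (1, 4), (2, 1)])
        return (acc + 8) >> 4          # divide by 16 with rounding, no floats

    return blur_1d(blur_1d(img, 0), 1).astype(np.uint8)
```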
|
116 |
Controle de fixação atentivo para uma cabeça robótica com visão binocular / Attentive gaze control for a binocular robot head. Roos, André Filipe, 29 August 2016
Computer vision research is still far from replicating the adaptability and performance of the Human Visual System. Most of its consolidated techniques are valid only for static scenes and under restrictive conditions. Robot heads represent an advance in flexibility by carrying cameras that can be moved freely to explore the surroundings. Artificial observation of dynamic environments requires solving at least two problems: determining what relevant perceptual information to extract from the sensors, and how to control their movement in order to shift and hold gaze on targets of arbitrary shape and motion. In this work, a general binocular gaze control system is proposed, and the subsystem responsible for selecting targets and fixating lateral displacements is designed, tested and assessed on a robot head with four degrees of freedom. The subsystem employs a popular low-level visual attention model to detect the most salient point in the scene, and a proportional-integral controller generates a conjunctive movement of the two cameras to center that point in the image of the left camera, assumed to be dominant. Development started with detailed physical modeling of the pan-and-tilt mechanism that drives the cameras. The linearized structure obtained was then fitted to experimental input-output data via least squares estimation. Finally, the controller gains were tuned by optimization and manual adjustment. The C++ implementation with the OpenCV library allowed real-time operation at 30 Hz. Experiments demonstrate that the system is capable of fixating static, highly salient targets without prior knowledge or strong assumptions. Targets in harmonic motion are pursued naturally, albeit with a phase lag. In cluttered scenes, where multiple potential targets compete for attention, the system may exhibit oscillatory behavior, requiring fine adjustment of the algorithm weights for smooth operation. The addition of a neck controller and of a vergence controller to compensate for displacements in depth are the next steps toward a generic artificial observer.
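For the pan axis, the loop described above reduces to a PI law acting on the pixel error between the most salient point and the image center. The sketch below shows that loop in its simplest form; the gains, the rate limit and the 30 Hz period are placeholders, not the tuned values from the thesis, and the saliency map is assumed to come from any low-level attention model.

```python
import numpy as np

class PanPIController:
    """PI control of the conjunctive pan movement (illustrative gains)."""

    def __init__(self, kp=0.004, ki=0.001, dt=1.0 / 30.0, limit=1.0):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def step(self, saliency_map):
        # Most salient pixel; its horizontal offset from center is the error.
        _, u = np.unravel_index(saliency_map.argmax(), saliency_map.shape)
        error = u - saliency_map.shape[1] / 2.0
        self.integral += error * self.dt
        cmd = self.kp * error + self.ki * self.integral
        return float(np.clip(cmd, -self.limit, self.limit))  # rad/s, assumed units
```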
|
117 |
Vision-based moving pedestrian recognition from imprecise and uncertain data / Reconnaissance de piétons par vision à partir de données imprécises et incertaines. Zhou, Dingfu, 05 December 2014
Building vision-based Advanced Driver Assistance Systems (ADAS) is a complex and challenging task in real-world traffic scenarios. An ADAS aims at perceiving and understanding the surrounding environment of the ego-vehicle and providing necessary assistance to the driver when facing emergencies. In this thesis we focus on detecting and recognizing moving objects, because their dynamics make them more unpredictable, and thus more dangerous, than static ones. Detecting these objects, estimating their positions and recognizing their categories are significantly important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving object detection and recognition based on vision sensors alone. The proposed approach can detect any kind of moving object based on two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF). The RIMF is defined as the residual image changes caused by moving objects once the camera motion has been compensated. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and disparity computation must also be considered. The main steps of the general algorithm are the following: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is calculated using a first-order error propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the segmentation of the moving objects. At the same time, the bounding boxes of moving objects are generated based on the U-disparity map. After obtaining the bounding box of a moving object, we want to classify it as a pedestrian or not. Compared to supervised classification algorithms (such as boosting and SVM), which require a large number of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled ones. The labeled instances are first used to estimate the probabilistic class labels of the unlabeled instances with Gaussian Mixture Models, after a dimension reduction step performed via Principal Component Analysis. Then, we apply a boosting strategy on decision stumps trained using the soft-labeled instances. The performance of the proposed method is evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem. Finally, both the moving object detection and the recognition algorithms are tested on the public KITTI image dataset, and the experimental results show that the proposed methods achieve good performance in different urban driving scenarios.
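The semi-supervised stage can be sketched with standard tools: PCA for dimension reduction, one Gaussian mixture per class fitted on the few labeled instances to soft-label the unlabeled pool, then an AdaBoost-style loop over decision stumps with confidence-derived sample weights. This is a reconstruction from the abstract, assuming scikit-learn and binary classes; it is not the thesis code, and the weighting scheme is a plausible choice rather than the published one.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

def soft_label_boosting(X_lab, y_lab, X_unlab, n_rounds=20):
    pca = PCA(n_components=min(10, X_lab.shape[1]))
    pca.fit(np.vstack([X_lab, X_unlab]))
    Zl, Zu = pca.transform(X_lab), pca.transform(X_unlab)

    # One GMM per class; the log-likelihood gap gives probabilistic labels.
    gmms = {c: GaussianMixture(n_components=2).fit(Zl[y_lab == c]) for c in (0, 1)}
    log_p = np.column_stack([gmms[c].score_samples(Zu) for c in (0, 1)])
    y_soft = log_p.argmax(axis=1)
    conf = np.abs(log_p[:, 1] - log_p[:, 0])       # labeling confidence

    X = np.vstack([Zl, Zu])
    y = np.concatenate([y_lab, y_soft])
    w = np.concatenate([np.full(len(y_lab), conf.max() + 1.0), conf])
    w /= w.sum()

    stumps, alphas = [], []
    for _ in range(n_rounds):                      # discrete AdaBoost loop
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(np.where(pred == y, -alpha, alpha))  # up-weight mistakes
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return pca, stumps, alphas
```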
|
118 |
Bin Picking a robotické vidění / Bin Picking and Robotic Vision. Múčka, Jan, January 2019
The aim of this master's thesis is to describe the use of robotic vision for bin picking and to create an application realizing this task. The application is able to distinguish several objects based on data from a camera with depth perception: it should find an object, recognize it, and determine its position and orientation. Bin picking is one of the biggest challenges in today's automation.
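The thesis does not detail its pose pipeline, but a minimal baseline for the last step, estimating an object's position and orientation from depth-camera data, is sketched below: take the centroid of the segmented object's point cloud as the position and its principal axes as an orientation estimate. This is a hedged illustration, not the application's actual method.

```python
import numpy as np

def object_pose(points):
    """points: (N, 3) cloud of one segmented object. Returns the centroid
    (position) and a 3x3 matrix whose columns are the principal axes."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)  # PCA via SVD
    return centroid, vt.T
```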
|