
Terrestrial navigation using a low-grade inertial measurement unit and sensor fusion with a smoothed adaptive Kalman filter

Santana, Douglas Daniel Sampaio 01 June 2011
This work presents the development of the mathematical models and algorithms of a sensor fusion system for terrestrial navigation using a low-grade inertial measurement unit (IMU) and the Extended Kalman Filter. The models were developed on the basis of strapdown inertial navigation systems (SINS). Low-grade designates an IMU that is unable to perform gyrocompassing self-alignment. The impossibility of navigating with a low-grade IMU alone motivates the investigation of techniques that improve SINS accuracy through additional sensors. This thesis describes the development of a comprehensive sensor fusion model for the inertial navigation of a ground vehicle using a low-grade IMU, an odometer and an electronic compass. Landmarks were placed along the test trajectory to allow measurement of the position estimation error at those points. The thesis presents the development of the Smoothed Adaptive Kalman Filter (SAKF), which jointly estimates the states of the sensor fusion system and the errors of those estimates. A quantitative criterion is described that employs the position uncertainties estimated by the SAKF to determine a priori, given the available sensors, the maximum time interval over which one can navigate within a desired reliability margin. Reduced sets of landmarks are used as fictitious sensors to test the proposed reliability criterion. Also noteworthy are the mathematical models applied to terrestrial navigation that were unified in this work. The results show that, relying only on the low-grade inertial sensors, terrestrial navigation becomes impracticable after a few tens of seconds. Using the same inertial sensors, the sensor fusion produced far better results, allowing the reconstruction of trajectories with displacements of about 2.7 km (or 15 minutes) with a final position estimation error of about 3 m.
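
The SAKF equations are not reproduced in the abstract, but the underlying effect — inertial-only dead reckoning drifts without bound, while sparse aiding measurements keep the uncertainty bounded — can be illustrated with a one-dimensional Kalman filter fusing an odometer-like velocity with occasional landmark fixes. All values below are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

# Minimal 1D illustration (not the thesis's SAKF): predict position from a
# noisy velocity measurement, correct with sparse landmark position fixes.
dt, q_vel, r_lmk = 0.1, 0.05, 1.0      # assumed step, process and measurement noise
rng = np.random.default_rng(0)

x_est, p_est = 0.0, 1.0                # state estimate and its variance
x_true = 0.0
for k in range(1, 1501):               # 150 s of travel
    v_true = 3.0                       # constant true speed, m/s (assumption)
    x_true += v_true * dt
    v_meas = v_true + rng.normal(0, np.sqrt(q_vel))

    # Prediction: dead reckoning alone lets the variance grow without bound.
    x_est += v_meas * dt
    p_est += q_vel * dt**2

    # Correction: a landmark fix every 30 s bounds the position uncertainty.
    if k % 300 == 0:
        z = x_true + rng.normal(0, np.sqrt(r_lmk))
        gain = p_est / (p_est + r_lmk)
        x_est += gain * (z - x_est)
        p_est *= (1 - gain)
        print(f"t={k*dt:5.1f}s  err={abs(x_est - x_true):.2f} m  sigma={np.sqrt(p_est):.2f} m")
```

The thesis's reliability criterion corresponds to propagating the predicted variance (`p_est` here) forward in time and reporting how long it stays below a chosen confidence bound.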

Classification model for soil-contaminated water quality using decision tree induction

Dota, Mara Andréa 12 September 2014
The possibility to remotely and instantaneously evaluate changes in water quality due to soil input allows the monitoring of ecological processes such as siltation, soil loss, pesticide loading and degradation of aquatic habitats. Using an automated model to classify soil-contaminated water quality allows remote real-time monitoring, with data collected through Wireless Sensor Networks. This study proposes a model to classify soil-contaminated water quality using Decision Tree techniques. With this model it is possible to track changes that may occur in surface waters, indicating the level of soil contamination faster than the conventional approach, which requires laboratory analysis and manual sampling. The proposed classification considers seven classes of water quality, based on data from an experiment carried out in the laboratory. Artificial Intelligence techniques were used to implement Sensor Fusion and evaluate, in real time, which quality class a sample of sensor readings fits. The k-means++ algorithm was used to check how many classes would be ideal. To build the classification model, Decision Tree Induction techniques were used: Best-First Decision Tree Classifier (BFTree), Functional Trees (FT), Naïve Bayes Decision Tree (NBTree), Grafted C4.5 Decision Tree (J48graft), C4.5 Decision Tree (J48) and LADTree. Tests indicated that the proposed classification is consistent: the different algorithms' results confirmed a strong statistical relationship between the instances of the classes, giving confidence that the model will predict outputs for unknown inputs accurately. The algorithms with the best results were FT, J48graft and J48.
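
The abstract names Weka-family inducers (BFTree, FT, NBTree, J48graft, J48, LADTree) and k-means++ for deriving the classes; a rough scikit-learn analogue of the pipeline — with CART standing in for the C4.5 variants and synthetic readings standing in for the laboratory data — might look like this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the laboratory sensor readings (turbidity, pH, ...);
# the real feature set comes from the thesis's experiment.
rng = np.random.default_rng(42)
X = rng.normal(size=(700, 5)) + np.repeat(np.arange(7), 100)[:, None]

# Step 1: derive quality classes from unlabeled readings with k-means++
# (scikit-learn's KMeans uses k-means++ initialization by default).
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X)

# Step 2: induce a decision tree on the derived classes. CART here stands in
# for the Weka C4.5 variants (J48, J48graft) evaluated in the thesis.
tree = DecisionTreeClassifier(max_depth=8, random_state=0)
scores = cross_val_score(tree, X, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```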

Obstacle detection using 3D perception and radar data fusion in automotive vehicles

Rosero, Luis Alberto Rosero 30 January 2017
This master's project aims to research and develop methods and algorithms related to the use of radar, computer vision, calibration and sensor data fusion in autonomous/intelligent vehicles to detect obstacles. The obstacle detection process is divided into three stages: the first is the reading of radar and LiDAR signals and the capture of properly calibrated stereo camera data; the second is the fusion of the data obtained in the previous stage (radar + camera, radar + 3D LiDAR); the third is the extraction of features from the fused information, identifying and differentiating the support plane (ground) from the obstacles, and finally detecting the obstacles resulting from the data fusion. In this way it is possible to differentiate the types of elements identified by the radar that are confirmed and merged with the data obtained by computer vision or LiDAR (point clouds), obtaining a more precise description of their contour, shape, size and position. During the detection task it is important to locate and segment the obstacles in order to later make decisions regarding the control of the autonomous/intelligent vehicle. It is worth noting that radar operates in adverse conditions (little or no light, dust or fog) but provides only isolated, sparse points representing the obstacles, whereas the stereo camera and 3D LiDAR can define the contours of objects and represent their volume more adequately; the camera, however, is more susceptible to lighting variations and to restricted environmental and visibility conditions (e.g., dust, haze, rain). Before fusion it is also important to spatially align the sensor data, i.e., to calibrate the sensors appropriately so that data referenced in one sensor's coordinate system can be translated into another sensor's coordinate system or into a global one. This project was developed on the CaRINA II platform of the LRM Laboratory at ICMC/USP São Carlos. Finally, the project was implemented using ROS, OpenCV and PCL, allowing experiments with real radar, LiDAR and stereo camera data, as well as an evaluation of the quality of the data fusion and obstacle detection with these sensors.
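
As the abstract notes, fusion only makes sense after the sensors are spatially aligned. A minimal sketch of that step, assuming a known radar-to-LiDAR extrinsic transform (in practice estimated by a calibration procedure, not hard-coded) and a simple nearest-centroid association gate; all numeric values are illustrative:

```python
import numpy as np

def to_homogeneous(points):
    """Nx3 -> Nx4 homogeneous coordinates."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

# Assumed extrinsic calibration: rigid transform from radar to LiDAR frame.
yaw = np.deg2rad(2.0)
T_radar_to_lidar = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0, 1.2],   # small rotation + 1.2 m forward offset
    [np.sin(yaw),  np.cos(yaw), 0.0, 0.0],
    [0.0,          0.0,         1.0, 0.5],
    [0.0,          0.0,         0.0, 1.0],
])

radar_targets = np.array([[12.0, 1.5, 0.0], [25.0, -3.0, 0.0]])     # sparse returns
cluster_centroids = np.array([[13.1, 1.4, 0.6], [40.0, 5.0, 0.5]])  # from LiDAR segmentation

# Express radar targets in the LiDAR frame, then gate by Euclidean distance:
# a radar return confirmed by a nearby point-cloud cluster becomes a fused obstacle.
in_lidar = (T_radar_to_lidar @ to_homogeneous(radar_targets).T).T[:, :3]
for tgt in in_lidar:
    d = np.linalg.norm(cluster_centroids - tgt, axis=1)
    if d.min() < 2.0:                         # assumed association gate, metres
        print(f"fused obstacle near {cluster_centroids[d.argmin()]}")
    else:
        print(f"unconfirmed radar return at {tgt.round(2)}")
```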

Monitoring of grinding operations using sensor fusion

Schühli, Luciano Alcindo 02 August 2007
This study presents an experimental analysis of a monitoring system based on a sensor fusion strategy applied to a cylindrical grinding machine. It fuses the power and acoustic emission signals to obtain the FAP (Fast Abrasive Power) parameter using the method developed by Valente (2003). The power and acoustic emission signals were captured under operational dysfunction conditions simulated during the grinding process (stock imperfection, collision, unbalance and vibration). The FAP parameter was then generated from these signals, and its ability to characterize the dysfunctions was compared with the performance of the power and acoustic emission signals analyzed individually. For this analysis, FAP and acoustic maps were built, along with plots of the signal variations versus process execution time. The experimental data reveal that the FAP responds faster than the power signal and only slightly damped relative to the acoustic emission. Its signal level matches that of the power signal and remains homogeneous during the process, unlike the acoustic emission, which can be influenced by several parameters unrelated to the tool-workpiece interaction, such as workpiece geometry, sensor distance and sensor mounting. The result is a dynamic, reliable response associated with the energy of the system. The evaluated monitoring system is also simple to install and run. These characteristics are valuable for monitoring grinding processes (excluding dressing) and are superior to those presented by the power and acoustic emission signals in isolation.
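
Valente's (2003) FAP formulation is not given in the abstract, so the snippet below is only a guess at its flavor: combine the fast-responding acoustic emission envelope with the well-scaled power signal, so the fused parameter reacts quickly but stays in power units. Sampling rate, window and signals are all assumptions.

```python
import numpy as np

def fap_like(power, ae, fs, window_s=0.05):
    """Illustrative fast-abrasive-power estimate: rescale the AE RMS envelope
    so its mean matches the slow but well-scaled power signal. A guess at the
    flavor of Valente (2003), not the published method."""
    n = int(fs * window_s)
    # moving RMS of acoustic emission (fast response, arbitrary units)
    ae_rms = np.sqrt(np.convolve(ae**2, np.ones(n) / n, mode="same"))
    # scale factor taken over a stretch where grinding is steady
    scale = power.mean() / ae_rms.mean()
    return ae_rms * scale

fs = 10_000                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
ae = np.random.default_rng(1).normal(0, 1 + 4 * (t > 1.0), t.size)  # AE jumps at a collision
power = 500 + 200 * np.clip((t - 1.0) * 10, 0, 1)                   # power lags behind
print(fap_like(power, ae, fs)[::5000])        # reacts at t=1.0 s like AE, scaled like power
```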

Multi-sensor position control of a collaborative robot using liquid state machines

Sala, Davi Alberto January 2017
The idea of employing biologically inspired neural networks to perform computation has been widely explored over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and this information is revealed by its spiking activity. By describing the dynamics of a single neuron with a mathematical model, a network can be implemented in which the spiking activity of each neuron carries contributions, or information, from the spiking activity of the network it is embedded in. This work presents a Z-axis positioning controller based on sensor fusion with Spiking Neural Networks, suitable to run on a neuromorphic computer. The proposed framework uses the reservoir computing paradigm, in the form of a Liquid State Machine (LSM), to control the collaborative robot BAXTER. It was designed to work in parallel with LSMs that execute trajectories along closed two-dimensional shapes; in order to keep a felt pen in contact with the drawing surface, data from force and distance sensors are fed to the controller. The system was trained on data from a Proportional Integral Derivative (PID) controller, merging the data from both sensors. Results show that the LSM was able to learn the behavior of the PID controller in different situations.
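
As a rough, non-spiking stand-in for the LSM, an echo-state reservoir with a ridge-regression readout can be trained on recorded PID input/output pairs in the same imitation-learning fashion the abstract describes; the toy plant, reservoir size and scalings below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Generate training data from a PID controller on a toy 1st-order plant ---
kp, ki, kd, dt = 2.0, 0.5, 0.1, 0.01
z, integ, prev_e = 0.0, 0.0, 0.0             # plant state, PID memory
E, U = [], []                                 # controller inputs (errors) and outputs
for k in range(2000):
    ref = 1.0 if (k // 500) % 2 == 0 else -1.0    # square-wave setpoint
    e = ref - z
    integ += e * dt
    u = kp * e + ki * integ + kd * (e - prev_e) / dt
    prev_e = e
    z += dt * (-z + u)                        # toy plant, stands in for the robot axis
    E.append(e); U.append(u)

# --- Train a rate-based reservoir (ESN) to reproduce the PID mapping ---
N = 200
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1 / np.sqrt(N), (N, N)) * 0.9   # scaled for stable dynamics
states, x = [], np.zeros(N)
for e in E:
    x = np.tanh(W @ x + W_in * e)            # the reservoir's fading memory supplies
    states.append(x.copy())                  # the integral/derivative context
S = np.array(states)
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(N), S.T @ np.array(U))
print("train RMSE:", np.sqrt(np.mean((S @ W_out - np.array(U)) ** 2)))
```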

Multi-modal recognition of manipulation activities through visual accelerometer tracking, relational histograms, and user-adaptation

Stein, Sebastian January 2014
Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities. This thesis proposes a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides for each accelerometer-equipped object a location estimate in the camera view by identifying a point trajectory that matches well the accelerometer data. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics characterizes statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, uses an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach where features are extracted from each sensor-type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this thesis investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies. All proposed methods are evaluated on two new challenging datasets of food preparation activities that have been made publicly available. Both datasets feature a novel combination of video and accelerometers attached to objects. The Accelerometer Localization dataset is the first publicly available dataset that enables quantitative evaluation of accelerometer tracking algorithms. The 50 Salads dataset contains 50 sequences of people preparing mixed salads with detailed activity annotations.
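
The accelerometer-tracking step — identifying the point trajectory whose visual motion best matches the accelerometer data — can be sketched by differentiating each candidate tracklet twice and correlating the resulting acceleration magnitude against the measured one. The scoring function and synthetic trajectories below are illustrative assumptions, not the thesis's actual matching criterion.

```python
import numpy as np

def accel_match_score(tracklet_xy, accel_mag):
    """Correlation between a tracklet's image-acceleration magnitude and the
    accelerometer's measured magnitude (both z-scored)."""
    vis_acc = np.linalg.norm(np.diff(tracklet_xy, n=2, axis=0), axis=1)
    a = accel_mag[:len(vis_acc)]
    vis = (vis_acc - vis_acc.mean()) / (vis_acc.std() + 1e-9)
    mea = (a - a.mean()) / (a.std() + 1e-9)
    return float(np.mean(vis * mea))

rng = np.random.default_rng(3)
T = 200
true_traj = np.cumsum(rng.normal(0, 2, (T, 2)), axis=0)    # the tagged object's path
distractor = np.cumsum(rng.normal(0, 2, (T, 2)), axis=0)   # some other scene motion
accel = np.linalg.norm(np.diff(true_traj, n=2, axis=0), axis=1) \
        + rng.normal(0, 0.2, T - 2)                         # noisy IMU magnitude

scores = {name: accel_match_score(tr, accel)
          for name, tr in [("true", true_traj), ("distractor", distractor)]}
print(scores)   # the accelerometer-equipped object's tracklet should score highest
```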

Information access in mobile environments for museum visits: deep neural networks for instance and gesture recognition

Portaz, Maxime 24 October 2018
This thesis is part of the GUIMUTEIC project, which aims to equip museum visitors with an audio guide enhanced by a camera. It addresses the problem of information access in a mobile environment by automatically providing information about museum artefacts. To deliver this information, the system needs to know when the visitor desires guidance and what he or she is looking at, in order to give the correct response. This raises the issues of identifying points of interest, to determine the context, and of identifying user gestures, to meet the visitor's demands. As part of the project, the visitor is equipped with an embedded camera. The goal is to provide a solution that assists the visit, developing vision methods for object identification and gesture detection in first-person videos. This thesis presents a study of the feasibility and interest of such visit assistance, as well as of the relevance of gestures for interaction with an embedded system. We propose a new approach to object identification based on siamese deep neural networks that learn image similarity and regions of interest within the image. We also explore the use of small networks for gesture recognition in mobility, presenting an architecture that uses a new type of convolution block to reduce the number of network parameters and allow its use on a mobile processor. To evaluate our proposals, we rely on several image retrieval and gesture corpora, created specifically to match the constraints of the project.
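
A minimal PyTorch sketch of the similarity-learning idea: a twin-branch (siamese) encoder trained with a contrastive loss so that matching image pairs land close in embedding space. The tiny architecture, margin and random tensors are placeholders, not the thesis's actual networks or region-of-interest mechanism.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Tiny twin-branch embedder; both branches share the same weights."""
    def __init__(self, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, a, b):
        return self.encoder(a), self.encoder(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """Pull matching pairs together, push non-matching pairs past the margin."""
    d = torch.norm(za - zb, dim=1)
    return (same * d**2 + (1 - same) * torch.clamp(margin - d, min=0)**2).mean()

# Smoke test on random "images"; real training pairs would come from the corpora.
net = SiameseNet()
a, b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*net(a, b), same)
loss.backward()
print(float(loss))
```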

The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection

Stone, David L. 01 January 2019 (has links)
This thesis investigates the application of index-based, region segmentation, and deep learning methods to the fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors, to increase the level of intelligent perception of unmanned robotic platforms. The goal of this work is first to provide a more robust calibration approach and improve the calibration of low-resolution, noisy O-D IR cameras, and then to explore the best sensor fusion approach for vegetation detection. We compared index-based, region segmentation, and deep learning methods, aiming for a significant reduction in false positives while maintaining reasonable vegetation detection. The results are as follows. Direct Spherical Calibration of the IR camera provided a more consistent and robust calibration-board capture and yielded the best overall calibration results, with sub-pixel accuracy. The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here. The modified Normalized Difference Vegetation Index approach achieved 86.74% recognition and 32.5% false positives, with peaks up to 80%. Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction). Our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with a significant (92%) reduction in false positives compared to the modified NDVI approach, reaching 95.6% recognition with 2% false positives. Current approaches are primarily focused on O-D color vision for localization, mapping and tracking, and do not adequately address the application of these sensors to vegetation detection; we demonstrate the contrast between current approaches and our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision coupled with deep learning for the extraction of vegetation material type has great potential for robot perception. This thesis examines two architectures for fusing O-D IR and O-D visual sensors: 1) autoencoder feature extractors feeding a deep Convolutional Neural Network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet). We show that the vegetation recognition rate and the number of false detections inherent in classical index-based spectral decomposition are greatly improved by our DeepFuseNet architecture. We first investigate the calibration of an omnidirectional IR camera for intelligent perception applications. The edge boundaries in low-resolution O-D IR images are not as sharp as those from color vision cameras, so the standard calibration methods were harder to use and less accurate given the low definition of the omnidirectional IR camera. To more fully address omnidirectional IR camera calibration, we propose a new calibration-grid center-coordinate control point discovery methodology and a Direct Spherical Calibration (DSC) approach for more robust and accurate calibration.
DSC addresses the limitations of existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies, augmenting them with additional output of the spherical results for comparison. We also investigate the optimum number of calibration boards, using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology and number of boards. The benefits of DSC are more efficient calibration-board geometry selection and better accuracy than the three baseline methodologies. In the context of vegetation detection, the fusion of O-D IR and color vision sensors may increase the level of vegetation perception of unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detections inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with vegetation minimizes the number of false detections seen with NDVI alone. The contribution of this work is the demonstration of two new techniques: the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color, and the fusion of the Kinect vision sensor with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detections compared to classical index-based detection. Finally, we compare our DeepFuseNet results with our previous work on NDVI and IR region-based spectral fusion. This work shows that fusing the O-D IR and O-D visual streams with our DeepFuseNet deep learning approach outperforms NDVI fused with far-infrared region segmentation: our experimental validation demonstrates a 92% reduction in false detections compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN network (DeepFuseNet).
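
The index-based baseline and the thermal-gating idea can be condensed into a few lines: compute NDVI from red and near-infrared bands, then keep only detections that also fall inside a thermal signature mask — the mechanism by which region fusion suppresses false positives. Band values and thresholds below are synthetic assumptions, not the thesis's calibrated parameters.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, per pixel."""
    return (nir - red) / (nir + red + 1e-6)

rng = np.random.default_rng(7)
red = rng.uniform(0.05, 0.4, (64, 64))
nir = red + rng.uniform(0.0, 0.5, (64, 64))         # synthetic stand-in bands
thermal = rng.uniform(280, 310, (64, 64))           # synthetic IR temperatures, K

veg_index = ndvi(nir, red) > 0.3                    # assumed NDVI threshold
# TRF-style gating (our reading of the abstract): keep an NDVI detection only
# where the thermal image also shows a vegetation-like signature.
thermal_mask = (thermal > 285) & (thermal < 300)    # assumed signature range
fused = veg_index & thermal_mask

suppressed = veg_index.sum() - fused.sum()
print(f"NDVI-only detections: {veg_index.sum()}, after thermal gating: {fused.sum()} "
      f"({suppressed} suppressed)")
```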

Multi-source fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework

Wei, Lijun 17 July 2013
In some dense urban environments (e.g., a street with tall buildings), the vehicle localization provided by a Global Positioning System (GPS) receiver may be inaccurate or even unavailable due to signal reflection (multi-path) or poor satellite visibility. In order to improve the accuracy and robustness of assisted navigation systems, and thus guarantee driving safety and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. First, GPS localization is complemented by onboard dead reckoning (DR, from an inertial measurement unit, odometer and gyroscope), stereovision-based visual odometry, horizontal laser range finder (LRF) based scan alignment, and 2D GIS road-network map matching, to provide a coarse vehicle pose estimate. A sensor selection step validates the coherence of the observations from the different sensors; only information provided by the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, when GPS receivers suffer long outages, the accumulated localization error of the DR-only method is bounded by adding a GIS building-map layer. Two onboard LRF systems (one horizontal, one vertical), mounted on the roof of the vehicle, detect building facades in the urban environment; the detected facades are projected onto the 2D ground plane and associated with the GIS building-map layer to correct the vehicle pose error, especially the lateral error. The facade landmarks extracted from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated on real data sequences. Experimental results show that the fusion of the stereoscopic system and the LRF can keep localizing the vehicle during short GPS outages and correct GPS positioning errors such as jumps; the road map helps obtain an approximate vehicle position by projecting it onto the corresponding road segment; and integrating the building information helps refine the initial pose estimate, in particular the lateral position, when GPS signals are lost for a long time.
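
The loosely coupled combination with an information filter has a convenient additive form: each validated sensor contributes its information matrix and vector, and fusion is a sum. A minimal position-only sketch, with illustrative noise values standing in for the thesis's tuned sensor models:

```python
import numpy as np

# Loosely coupled information-filter update (sketch): each validated sensor
# contributes H^T R^-1 H to the information matrix and H^T R^-1 z to the
# information vector. Values below are illustrative assumptions.
Y = np.eye(2) * 0.1                  # prior information matrix (weak prior)
y = Y @ np.array([0.0, 0.0])         # prior information vector

observations = [
    (np.array([10.2, 5.1]), np.diag([4.0, 4.0])),   # GPS fix, large variance
    (np.array([10.0, 4.8]), np.diag([0.5, 0.5])),   # facade/map match, lateral aid
]
H = np.eye(2)                         # both sensors observe position directly
for z, R in observations:
    R_inv = np.linalg.inv(R)
    Y += H.T @ R_inv @ H              # information additivity makes fusion a sum
    y += H.T @ R_inv @ z

x_fused = np.linalg.solve(Y, y)       # back to state space
print("fused position:", x_fused.round(3))
print("covariance:", np.linalg.inv(Y).round(3))
```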
