181

Approche modulaire pour le suivi temps réel de cibles multi-capteurs pour les applications routières / Modular and real time multi sensors multi target tracking system for ITS purpose

Lamard, Laetitia 10 July 2014 (has links)
This PhD work, carried out in collaboration with Institut Pascal and Renault, belongs to the field of advanced driver assistance systems, most of which aim to improve passenger safety. Fusing data from several sensors makes the system's decisions more reliable. The goal of this work was to develop a fusion system combining a radar and a smart camera to detect obstacles in front of the vehicle. We propose a modular, real-time fusion architecture that uses asynchronous sensor data without any prior knowledge about the application. The fusion system is based on multi-target tracking: probabilistic tracking methods were considered, and a method that models the set of obstacles as a random finite set was selected and tested in real time. This filter, the CPHD (Cardinalized Probability Hypothesis Density) filter, handles the various sensor defects (missed detections, false alarms, and imprecision in the measured positions and speeds) as well as the uncertainty of the environment (an unknown number of obstacles to detect). The system was further improved by managing different obstacle types: pedestrian, car, truck and bicycle. We also propose an explicit probabilistic treatment of camera occlusions that takes the imprecision of this sensor into account. The use of smart sensors introduces measurement correlation (due to data pre-processing), which we handle through an analysis and correction of the estimated detection performance of these sensors. To complete the fusion system, a tool was developed to quickly determine the fusion parameters to use for the different sensors. The system was tested in real driving situations through numerous experiments, and each contribution was validated both qualitatively and quantitatively.
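
For illustration only, the sketch below shows one way an asynchronous fusion front-end of this kind could be organised in Python: time-stamped radar and camera detections are queued and replayed in temporal order to a multi-target tracker. The Detection fields and the tracker interface (predict/update/estimates) are assumptions made for the example, not the interfaces used in the thesis.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Detection:
    timestamp: float
    sensor: str = field(compare=False)      # "radar" or "camera"
    position: tuple = field(compare=False)  # (x, y) in the vehicle frame
    speed: float = field(compare=False)

class FusionNode:
    """Orders asynchronous detections by time and feeds them to a tracker."""
    def __init__(self, tracker):
        self.tracker = tracker    # e.g. a CPHD-style multi-target filter (assumed interface)
        self.queue = []           # min-heap keyed on timestamp

    def on_detection(self, det):
        heapq.heappush(self.queue, det)

    def step(self, now, latency=0.05):
        # Process every detection older than the allowed sensor latency,
        # predicting the tracker forward to each measurement time.
        while self.queue and self.queue[0].timestamp <= now - latency:
            det = heapq.heappop(self.queue)
            self.tracker.predict(det.timestamp)
            self.tracker.update([det], sensor=det.sensor)
        return self.tracker.estimates()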
182

Navegação terrestre usando unidade de medição inercial de baixo desempenho e fusão sensorial com filtro de Kalman adaptativo suavizado. / Terrestrial navigation using low-grade inertial measurement unit and sensor fusion with smoothed adaptive Kalman filter.

Santana, Douglas Daniel Sampaio 01 June 2011 (has links)
This work presents the mathematical models and algorithms of a sensor fusion system for terrestrial navigation using a low-grade inertial measurement unit (IMU) and the Extended Kalman Filter. The models are based on strapdown inertial navigation systems (SINS); "low-grade" designates an IMU that cannot perform gyrocompassing self-alignment on its own. The impossibility of navigating with a low-grade IMU alone motivates the investigation of techniques that improve SINS accuracy through additional sensors. The thesis describes the complete model of a sensor fusion scheme for the inertial navigation of a ground vehicle using a low-grade IMU, an odometer and an electronic compass. Landmarks were placed along the test trajectory to measure the position estimation error at those points. The work also presents the Smoothed Adaptive Kalman Filter (SAKF), which jointly estimates the states of the sensor fusion system and the errors of those estimates, and a quantitative criterion that uses the position uncertainties estimated by the SAKF to determine a priori, given the available sensors, the maximum time interval over which one can navigate within a desired reliability margin. Reduced sets of landmarks are used as fictitious sensors to test the proposed reliability criterion. The mathematical models applied to terrestrial navigation, unified in this work, are also noteworthy. The results show that, relying only on the low-grade inertial sensors, terrestrial navigation becomes impracticable after a few tens of seconds. Using the same inertial sensors, the sensor fusion produced far better results, allowing the reconstruction of trajectories with displacements of about 2.7 km (or 15 minutes) with a final position estimation error of about 3 m.
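
As a hedged illustration of the kind of correction step such a fusion performs, the following generic extended Kalman filter measurement update (written with NumPy) shows how an aiding measurement such as a compass heading could correct an INS-predicted state. The state layout, Jacobian and noise values are placeholders for the example; this is not the SAKF itself.

import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic extended Kalman filter measurement update.
    x, P : predicted state and covariance (e.g. from the INS mechanization)
    z    : measurement (e.g. odometer speed or compass heading)
    h    : measurement function, H its Jacobian at x, R its noise covariance."""
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Illustrative use: state = [x, y, heading, speed]; the compass measures heading.
x = np.array([0.0, 0.0, 0.10, 5.0])
P = np.diag([1.0, 1.0, 0.05, 0.5])
H = np.array([[0.0, 0.0, 1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([0.12]),
                  h=lambda s: s[2:3], H=H, R=np.array([[0.01]]))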
183

Modelo para a classificação da qualidade da água contaminada por solo usando indução por árvore de decisão. / Classification model for soil-contaminated water quality using decision tree induction.

Dota, Mara Andréa 12 September 2014 (has links)
The possibility of remotely and instantaneously evaluating changes in water quality caused by soil input allows the monitoring of ecological processes such as siltation, soil loss, pesticide transport and the degradation of aquatic habitats. With an automated model for classifying soil-contaminated water quality, remote real-time monitoring becomes possible by collecting data through Wireless Sensor Networks. This research proposes a model for classifying the quality of soil-contaminated water using Decision Tree techniques. With this model it is possible to track changes occurring in surface waters and indicate the level of soil contamination faster than the conventional approach, which requires manual sampling and laboratory analysis. The proposed classification considers seven water quality classes, based on data from an experiment carried out in the laboratory. Artificial Intelligence techniques were used to perform sensor fusion and evaluate, in real time, the sensor readings, indicating which quality class a sample belongs to. The k-means++ algorithm was used to verify how many classes would be ideal. Decision tree induction techniques were used to build the classification model: Best-First Decision Tree Classifier (BFTree), Functional Trees (FT), Naïve Bayes Decision Tree (NBTree), Grafted C4.5 Decision Tree (J48graft), C4.5 Decision Tree (J48) and LADTree. The tests indicate that the proposed classification is consistent, since the different algorithms confirmed a strong statistical relationship between the instances of the classes, ensuring that the model will accurately predict outputs for unknown inputs. The algorithms with the best results were FT, J48graft and J48.
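
A minimal sketch of this pipeline, using scikit-learn as a stand-in for the Weka implementations cited above (BFTree, J48, etc.), might look as follows; the sensor features and parameter values are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sensor readings: turbidity, conductivity, pH, dissolved oxygen.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

# Step 1: use k-means++ seeding to check how many quality classes the data supports.
labels = KMeans(n_clusters=7, init="k-means++", n_init=10, random_state=0).fit_predict(X)

# Step 2: train a decision tree (a stand-in for J48/FT) to predict the class of new samples.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))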
184

Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos / Obstacle detection using 3D perception and radar data fusion in automotive vehicles

Rosero, Luis Alberto Rosero 30 January 2017 (has links)
This master's project concerns the research and development of methods and algorithms related to radar, computer vision, sensor calibration and sensor fusion for obstacle detection in autonomous/intelligent vehicles. The obstacle detection process is divided into three stages: the first is reading the radar and LiDAR signals and capturing data from the properly calibrated stereo camera; the second is fusing the data obtained in the previous stage (radar + camera, radar + 3D LiDAR); the third is extracting features from the fused information, identifying and separating the support plane (ground) from the obstacles, and finally detecting the obstacles resulting from the data fusion. It thus becomes possible to distinguish the different kinds of elements identified by the radar, confirm them, and join them with the data obtained by computer vision or LiDAR (point clouds), obtaining a more precise description of their contour, shape, size and position. During the detection task it is important to locate and segment the obstacles in order to later make decisions regarding the control of the autonomous/intelligent vehicle. Radar operates in adverse conditions (little or no light, dust or fog) but provides only isolated, sparse points representing the obstacles, whereas the stereo camera and the 3D LiDAR define the contours of objects and represent their volume more adequately; the camera, however, is more susceptible to lighting variations and to restricted environmental and visibility conditions (e.g. dust, fog, rain). Before fusion it is also important to spatially align the sensor data, that is, to calibrate the sensors properly so that data expressed in one sensor's coordinate system can be transformed into another sensor's coordinate system or into a global one. The project was developed on the CaRINA II platform of the LRM Laboratory at ICMC/USP São Carlos and implemented with ROS, OpenCV and PCL, allowing experiments with real radar, LiDAR and stereo camera data as well as an evaluation of the quality of the data fusion and of the obstacle detection with these sensors.
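
As an illustration of the ground/obstacle separation step described above, the following NumPy sketch fits a ground plane to a point cloud with RANSAC and keeps the remaining points as obstacle candidates. The project itself used PCL, so this is only an assumed, simplified equivalent with placeholder thresholds.

import numpy as np

def ransac_ground_plane(points, iters=200, dist_thresh=0.15, rng=np.random.default_rng(0)):
    """Fit a ground plane n.x + d = 0 to an (N, 3) point cloud with RANSAC and
    return a boolean mask of ground points (the rest are obstacle candidates)."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

cloud = np.random.rand(1000, 3) * [20, 20, 0.1]   # mostly flat synthetic points
ground = ransac_ground_plane(cloud)
obstacles = cloud[~ground]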
185

Monitoramento de operações de retificação usando fusão de sensores / Monitoring of grinding operations using sensor fusion

Schühli, Luciano Alcindo 02 August 2007 (has links)
This work presents an experimental analysis of a monitoring system based on a sensor fusion strategy applied to an external cylindrical grinding machine. The fusion combines the power and acoustic emission signals to obtain the FAP (Fast Abrasive Power) parameter using the method developed by Valente (2003). Problems found in grinding processes (stock allowance faults, collision, unbalance and vibration) were simulated; the power and acoustic emission signals were captured, the FAP parameter was generated from them, and its performance in detecting the problems was compared with that of the two individual signals. For the analysis, plots of the signal variations over the process execution time were built, together with FAP and acoustic maps. The evaluated monitoring system is characterized by low complexity of installation and execution. The experimental data show that the FAP responds faster than the power signal and only slightly damped with respect to the acoustic emission. Its signal level is equal to that of the power signal and remains homogeneous during the process, unlike the acoustic emission, which can be influenced by several other parameters, such as workpiece geometry, sensor distance and sensor mounting, that are independent of the tool-workpiece interaction. The result is a dynamic and reliable response associated with the energy of the system. These characteristics are attractive for monitoring grinding processes (excluding dressing) and are superior to those presented by the power and acoustic emission signals taken separately.
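
The exact FAP formulation follows Valente (2003) and is not reproduced here; purely as an illustrative stand-in with the same intent (a signal as fast as acoustic emission but scaled to the power level), the two signals could be combined as in the sketch below. The window lengths and the scaling rule are assumptions of the example, not the thesis's method.

import numpy as np

def fused_abrasive_signal(power, ae, fs, window_s=0.05):
    """Illustrative power/AE fusion: rescale the fast acoustic-emission envelope
    so that its local mean follows the (slower) spindle power level.
    This is NOT Valente's FAP formulation, only a stand-in with the same intent."""
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    ae_env = np.convolve(np.abs(ae), kernel, mode="same")        # AE envelope
    ae_mean = np.convolve(ae_env, kernel, mode="same") + 1e-12   # local AE level
    power_mean = np.convolve(power, kernel, mode="same")         # local power level
    return ae_env * power_mean / ae_mean                         # fast signal, power scale

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
power = 2.0 + 0.5 * (t > 0.5)                     # slow power step at grinding contact
ae = np.random.randn(t.size) * (1 + 4 * (t > 0.5))
fap_like = fused_abrasive_signal(power, ae, fs)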
186

Controle de posição com múltiplos sensores em um robô colaborativo utilizando liquid state machines / Position control with multiple sensors on a collaborative robot using liquid state machines

Sala, Davi Alberto January 2017 (has links)
The idea of employing biologically inspired neural networks to perform computation has been widely explored over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and this information can be revealed by its spiking activity. By describing the dynamics of a single neuron with a mathematical model, a network can be built from a set of such neurons in which the spiking activity of each neuron carries contributions, or information, from the spiking activity of the network in which it is embedded. This work presents a Z-axis position controller based on sensor fusion with recurrent spiking neural networks, suitable to run on a neuromorphic computer. The proposed framework uses the reservoir computing paradigm, in the form of a Liquid State Machine (LSM), to control the collaborative robot BAXTER. The system was designed to work in parallel with LSMs that execute trajectories over closed two-dimensional shapes; in order to keep a felt pen in contact with the drawing surface, data from force and distance sensors are fed to the controller. The system was trained with data from a Proportional-Integral-Derivative (PID) controller, merging the data from both sensors. The results show that the LSM was able to learn the behaviour of the PID controller in different situations.
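
The sketch below illustrates the reservoir-computing idea with a rate-based echo state network whose linear readout is trained by ridge regression to imitate PID commands; this is a simplified stand-in for the spiking LSM used in the work, and the sensor data and dimensions are synthetic.

import numpy as np

class TinyReservoir:
    """Rate-based echo-state reservoir, a simplified stand-in for a spiking LSM."""
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.normal(size=(n_res, n_res))
        self.w = w * spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w_out = None

    def run(self, inputs):
        x = np.zeros(self.w.shape[0])
        states = []
        for u in inputs:                       # inputs: (T, n_in) sensor samples
            x = np.tanh(self.w_in @ u + self.w @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(self, inputs, targets, ridge=1e-4):
        S = self.run(inputs)                   # targets: (T,) PID commands to imitate
        A = S.T @ S + ridge * np.eye(S.shape[1])
        self.w_out = np.linalg.solve(A, S.T @ targets)

    def predict(self, inputs):
        return self.run(inputs) @ self.w_out

# Illustrative training data: force/distance readings and the PID command to imitate.
T = 2000
sensors = np.random.randn(T, 2)
pid_command = 0.6 * sensors[:, 0] - 0.3 * sensors[:, 1]
res = TinyReservoir(n_in=2)
res.train_readout(sensors, pid_command)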
188

Multi-modal recognition of manipulation activities through visual accelerometer tracking, relational histograms, and user-adaptation

Stein, Sebastian January 2014 (has links)
Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities. This thesis proposes a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides for each accelerometer-equipped object a location estimate in the camera view by identifying a point trajectory that matches well the accelerometer data. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics characterizes statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, uses an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach where features are extracted from each sensor-type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this thesis investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies. All proposed methods are evaluated on two new challenging datasets of food preparation activities that have been made publicly available. Both datasets feature a novel combination of video and accelerometers attached to objects. The Accelerometer Localization dataset is the first publicly available dataset that enables quantitative evaluation of accelerometer tracking algorithms. The 50 Salads dataset contains 50 sequences of people preparing mixed salads with detailed activity annotations.
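
As a rough illustration of the accelerometer-tracking idea (matching an accelerometer stream to the visual point trajectory that best explains it), candidate trajectories could be scored by correlating their image-space acceleration with the accelerometer magnitude, as in the NumPy sketch below; this is not the thesis's actual matching criterion, only an assumed simplification.

import numpy as np

def trajectory_accel_score(trajectory, accel_mag, fps):
    """Score how well a visual point trajectory matches an accelerometer stream.
    trajectory : (T, 2) pixel positions of one tracked point
    accel_mag  : (T,) accelerometer magnitude resampled to the video frame rate."""
    visual_acc = np.gradient(np.gradient(trajectory, axis=0), axis=0) * fps ** 2
    visual_mag = np.linalg.norm(visual_acc, axis=1)
    a = (visual_mag - visual_mag.mean()) / (visual_mag.std() + 1e-9)
    b = (accel_mag - accel_mag.mean()) / (accel_mag.std() + 1e-9)
    return float(np.mean(a * b))               # normalized cross-correlation at lag 0

def locate_accelerometer(trajectories, accel_mag, fps=30):
    """Return the index of the candidate trajectory that best explains the accelerometer."""
    scores = [trajectory_accel_score(tr, accel_mag, fps) for tr in trajectories]
    return int(np.argmax(scores))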
189

Accès à de l'information en mobilité par l'image pour la visite de Musées : Réseaux profonds pour l'identification de gestes et d'objets / Information Access in mobile environment for museum visits : Deep Neural Networks for Instance and Gesture Recognition

Portaz, Maxime 24 October 2018 (has links)
This thesis is part of the GUIMUTEIC project, which aims to equip museum visitors with a camera-enhanced audio guide, and addresses the problem of information access in a mobile setting: making information about the artworks available to visitors automatically. To provide this information, the system needs to know when the visitor wants guidance and what he or she is looking at, which raises the problems of identifying points of interest, to determine the context, and of identifying user gestures, to respond to requests. Within the project, the visitor is equipped with a wearable camera, and the goal is to provide visit assistance by developing computer vision methods for object identification and for gesture detection in first-person videos. The thesis studies the feasibility and interest of such visit assistance, as well as the relevance of gestures for interacting with an embedded system. We propose a new approach to object identification based on deep siamese neural networks that learn image similarity while also learning regions of interest in the image. We also explore small networks for gesture recognition on mobile hardware, presenting an architecture built on a new type of convolution block that reduces the number of network parameters and allows it to run on a mobile processor. To evaluate these proposals, we rely on several image retrieval and gesture corpora created specifically to match the constraints of the project.
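
A minimal PyTorch sketch of the siamese-similarity idea is given below: a small embedding CNN trained with a contrastive loss on image pairs. The architecture, loss margin and input sizes are placeholders and do not reproduce the networks proposed in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small CNN that maps an image to an L2-normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)

def contrastive_loss(z1, z2, same, margin=0.5):
    """same = 1 for images of the same artwork, 0 otherwise."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = EmbeddingNet()
x1, x2 = torch.randn(8, 3, 128, 128), torch.randn(8, 3, 128, 128)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(net(x1), net(x2), same)
loss.backward()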
190

The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection

Stone, David L. 01 January 2019 (has links)
This thesis investigates the application of index-based, region segmentation, and deep learning methods to the fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors to increase the level of intelligent perception for unmanned robotic platforms. The first goal of this work is to provide a more robust calibration approach and improve the calibration of low-resolution, noisy O-D IR cameras. The second goal is to explore the best approach to sensor fusion for vegetation detection: index-based, region segmentation, and deep learning methods are compared, aiming at a significant reduction in false positives while maintaining reasonable vegetation detection. The results are as follows. Direct Spherical Calibration of the IR camera provided more consistent and robust calibration-board capture and gave the best overall calibration results, with sub-pixel accuracy. The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here: the modified Normalized Difference Vegetation Index approach achieved 86.74% recognition and 32.5% false positives (with peaks up to 80%); Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction); and our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with 95.6% recognition, 2% false positives, and a significant (92%) reduction in false positives compared to the modified NDVI approach. Current approaches are primarily focused on O-D color vision for localization, mapping, and tracking and do not adequately address the application of these sensors to vegetation detection; we contrast these approaches with our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision coupled with deep learning for extracting vegetation material type has great potential for robot perception. The thesis examines two architectures: (1) autoencoder feature extractors feeding a deep Convolutional Neural Network (CNN) fusion network (DeepFuseNet), and (2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet) for the fusion of O-D IR and O-D visual sensors. We show that the vegetation recognition rate and the number of false detections inherent in classical index-based spectral decomposition are greatly improved with the DeepFuseNet architecture.
We first investigate the calibration of the omnidirectional IR camera for intelligent perception applications. The edge boundaries of low-resolution O-D IR images are not as sharp as those of color vision cameras, and as a result the standard calibration methods were harder to use and less accurate with the low definition of the O-D IR camera. To address O-D IR camera calibration more fully, we propose a new methodology for discovering the control points at the calibration-grid centre coordinates and a Direct Spherical Calibration (DSC) approach for a more robust and accurate calibration. DSC addresses the limitations of existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera centre and iteratively solve for the camera parameters. We compare DSC with three baseline visual calibration methodologies, augmented with additional output of the spherical results for comparison, and we study the optimum number of calibration boards using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology and number of calibration boards. The benefits of DSC are more efficient selection of the calibration-board geometry and better accuracy than the three baseline visual calibration methodologies.
In the context of vegetation detection, the fusion of omnidirectional IR and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detections inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with vegetation minimizes the number of false detections seen with NDVI alone. The contribution of this work is the demonstration of the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color; we also examine the Kinect vision sensor fused with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detections with our method compared to classical index-based detection.
Finally, we compare the DeepFuseNet results with our previous work fusing NDVI with IR region-based spectral segmentation. The current work shows that fusing the O-D IR and O-D visual streams with our DeepFuseNet deep learning approach outperforms NDVI fused with far-infrared region segmentation: our experimental validation demonstrates a 92% reduction in false detections compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding a fully connected CNN fusion network (DeepFuseNet).
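
As a simple illustration of the index/region fusion idea that the deep learning approach is compared against, the NumPy sketch below computes NDVI per pixel and keeps detections only where a co-registered thermal channel also falls in a plausible vegetation band; all thresholds are placeholders rather than the values used in the thesis.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, thermal, ndvi_thresh=0.3, thermal_band=(280.0, 305.0)):
    """Illustrative index/region fusion: keep NDVI detections only where the
    thermal reading also falls inside an assumed vegetation temperature band."""
    index_mask = ndvi(nir, red) > ndvi_thresh
    thermal_mask = (thermal > thermal_band[0]) & (thermal < thermal_band[1])
    return index_mask & thermal_mask

# Synthetic example on a 4x4 scene (reflectances in [0, 1], temperatures in kelvin).
nir = np.random.rand(4, 4)
red = np.random.rand(4, 4)
thermal = 290 + 10 * np.random.rand(4, 4)
mask = vegetation_mask(nir, red, thermal)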
