21

Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagem / Real-time road lane-marking detection system for intelligent vehicles using image processing

Alves, Thiago Waszak January 2017 (has links)
Mobility is a hallmark of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transportation and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents, which may cause injuries and even deaths, range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry to image-based Intelligent Transportation Systems that aim to prevent accidents and assist the driver in interpreting road markings. This work presents a study of real-time lane-marking detection techniques for urban and intercity environments, with the goal of highlighting the lane markings for the driver or for an autonomous vehicle, providing greater awareness of the traffic area allotted to the vehicle and issuing alerts in possible risk situations.
The main contribution of this work is to optimize how image processing techniques are applied to extract the lane markings, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image that contain the lane markings, reducing by up to 75% the total area to which the lane-extraction techniques are applied. Experimental results showed that the algorithm is robust to many variations of ambient lighting, shadows and pavements of different colors, both in urban environments and on highways and motorways. The results show an average correct detection rate of 98.1%, with an average processing time of 13.3 ms.
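The fixed-size, dynamically positioned search windows can be sketched as follows. The frame resolution, window dimensions and the two tracked lane-center positions are illustrative assumptions, not values from the thesis; with these numbers the processed area drops to exactly 25% of the frame, matching the 75% reduction reported above.

```python
import numpy as np

def lane_search_windows(frame, prev_centers, win_w=160, win_h=240):
    """Restrict lane-marking extraction to small fixed-size windows
    placed around the lane centers found in the previous frame
    (dynamic positioning). Window sizes here are assumptions."""
    h, w = frame.shape
    windows = []
    for cx in prev_centers:
        x0 = int(np.clip(cx - win_w // 2, 0, w - win_w))
        y0 = h - win_h                       # markings appear near the bottom
        windows.append(frame[y0:y0 + win_h, x0:x0 + win_w])
    searched = sum(win.size for win in windows)
    reduction = 1.0 - searched / frame.size  # fraction of pixels skipped
    return windows, reduction

# Toy 640x480 grayscale frame with two hypothetical lane centers
frame = np.zeros((480, 640), dtype=np.uint8)
wins, red = lane_search_windows(frame, prev_centers=[200, 440])
```

Only the pixels inside `wins` would then be passed to the (more expensive) edge-detection and line-fitting stages.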
23

Sistema de identificação de superfícies navegáveis baseado em visão computacional e redes neurais artificiais / Navigable surfaces identification system based on computer vision and artificial neural networks

Patrick Yuri Shinzato 22 November 2010 (has links)
Autonomous navigation is a fundamental problem in mobile robotics. In order to perform this task, a robot must identify the areas where it can navigate safely. This dissertation proposes a navigable-surface identification system based on computer vision and artificial neural networks. More specifically, it presents a study of image attributes, such as statistical descriptors and elements of different color spaces, used as neural network inputs for the identification of navigable surfaces. The developed system combines the classification results of multiple artificial neural network configurations whose main difference is the set of image attributes used as input. This combination of classifications was designed for greater robustness and better road-identification performance across different scenarios.
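The combination of classification results from several networks can be illustrated with a minimal soft-voting sketch. The averaging rule, the threshold and the toy per-pixel probability maps are assumptions; the dissertation combines full ANN classifiers trained on different attribute sets, not precomputed maps.

```python
import numpy as np

def combine_classifiers(prob_maps, threshold=0.5):
    """Average per-pixel navigability probabilities from several
    classifiers (each hypothetically trained on a different attribute
    set, e.g. RGB means, HSV components, local variance) and threshold
    the soft vote into a boolean navigability mask."""
    stacked = np.stack(prob_maps)     # (n_classifiers, H, W)
    mean_prob = stacked.mean(axis=0)  # soft vote across classifiers
    return mean_prob >= threshold

# Three toy 2x2 "classifier outputs" that disagree on some pixels
a = np.array([[0.9, 0.1], [0.8, 0.2]])
b = np.array([[0.7, 0.2], [0.9, 0.4]])
c = np.array([[0.8, 0.6], [0.1, 0.3]])
mask = combine_classifiers([a, b, c])
```

Averaging lets a confident majority override a single classifier's mistake (e.g. `c` mislabels the bottom-left pixel, but the combined mask still marks it navigable).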
24

Multi-modal, Multi-Domain Pedestrian Detection and Classification : Proposals and Explorations in Visible over StereoVision, FIR and SWIR / Détection et classification de piétons multi-modale, multi-domaine : propositions et explorations dans visible sur stéréo vision, infrarouge lointain et infrarouge à ondes courtes

Miron, Alina Dana 16 July 2014 (has links)
The main purpose of building intelligent vehicles is to increase safety for all traffic participants. The detection of pedestrians, one of the most vulnerable categories of road users, is paramount for any Advanced Driver Assistance System (ADAS). Although this topic has been studied for almost fifty years, a perfect solution does not yet exist. This thesis focuses on several aspects of pedestrian classification and detection, with the objective of exploring and comparing multiple light spectra (visible, short-wave infrared, far infrared) and modalities (intensity, depth from stereo vision, motion). Among the variety of imagers, far-infrared (FIR) cameras, capable of measuring the temperature of the scene, are particularly interesting for detecting pedestrians, who usually have a higher temperature than their surroundings. Due to the lack of suitable public datasets containing thermal images, we acquired and annotated a database, named RIFIR, containing both visible and far-infrared images. This dataset allowed us to compare the performance of different state-of-the-art features in the two domains. Moreover, we propose a new feature adapted to FIR images, called Intensity Self Similarity (ISS). The ISS representation is based on the relative intensity similarity between different sub-blocks within a pedestrian region of interest. Experiments performed on different image sequences showed that, in general, the FIR spectrum performs better than the visible domain; nevertheless, the fusion of the two domains provides the best results.
The second domain we studied is the short-wave infrared (SWIR), a light spectrum never used before for pedestrian classification and detection. Unlike FIR cameras, SWIR cameras can image through the windshield and thus be mounted in the vehicle's cabin. In addition, SWIR imagers can see clearly at long distances, making them suitable for vehicle applications. We acquired and annotated a database, named RISWIR, containing both visible and SWIR images. This dataset allowed us to compare different pedestrian classification algorithms and the visible and SWIR domains against each other. Our tests showed that SWIR is promising for ADAS applications, performing better than the visible domain on the considered dataset.
Even though FIR and SWIR provide promising results, the visible domain is still the most widely used thanks to its low-cost cameras. Classical monocular imagers used for object detection and classification can lead to computation times well beyond real time. Stereo vision offers a way of reducing the hypothesis search space through the depth information contained in the disparity map, so a robust disparity map is essential for good hypotheses about pedestrian locations. In this context, we propose different cost functions, robust to radiometric distortions, for computing the disparity map, and show that simple post-processing techniques tailored to road scenes can greatly improve the quality of the obtained depth images. The disparity map is not strictly limited to hypothesis generation: it can also feed feature computation by providing information complementary to color images. We studied and compared the performance of features computed from different modalities (intensity, depth and optical flow) and in two domains (visible and FIR). The results showed that the most robust systems are those that take all three modalities into account, especially when dealing with occlusions.
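A minimal sketch of an ISS-style descriptor, assuming a sub-block grid and absolute differences of mean intensities as the similarity measure (the thesis defines the exact formulation; grid size is an assumption):

```python
import numpy as np

def intensity_self_similarity(roi, grid=(4, 2)):
    """Split the ROI into a grid of sub-blocks, take each block's mean
    intensity, and describe the ROI by the pairwise differences between
    block means. Because only relative intensities are used, the
    descriptor is invariant to a global additive brightness shift."""
    h, w = roi.shape
    gh, gw = grid
    bh, bw = h // gh, w // gw
    means = np.array([
        roi[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
        for r in range(gh) for c in range(gw)
    ])
    # Upper-triangle pairs only: one value per unordered block pair
    i, j = np.triu_indices(len(means), k=1)
    return np.abs(means[i] - means[j])

roi = np.arange(64, dtype=float).reshape(8, 8)  # toy pedestrian ROI
feat = intensity_self_similarity(roi)
```

With a 4x2 grid there are 8 blocks, hence C(8, 2) = 28 feature values, and shifting every pixel by a constant leaves the descriptor unchanged.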
25

Sistema ADAS para identificação de distrações e perturbações do motorista na condução de veículos / ADAS system for recognition of driver's distractions and disturbances while driving

Berri, Rafael Alceste 31 January 2019 (has links)
This work presents a system that uses features extracted from a frontal Kinect v2 sensor monitoring the driver, together with data from inertial sensors, vehicle telemetry and road/lane information, to recognize the driving style, enabling the detection of cell phone use while driving, drunk driving and drowsy driving, and thus avoiding driving risks. Indeed, when vehicles are driven by people on phone calls, the risk of a crash increases 4 to 6 times, and drunk drivers caused 10,497 deaths on US roads in 2016 according to the NHTSA, the agency responsible for traffic safety. A Naturalistic Driver Behavior Dataset (NDBD) was created specifically for this work and used to test and validate the proposed system. The proposed solution applies two analyses to the driver data, the Short-Term and Long-Term pattern recognition subsystems, so that risk situations can be detected while driving. The system has three alert levels: no alert, low alert and high alert. The Short-Term subsystem distinguishes between no alert and some level of alert, while the Long-Term subsystem determines whether the alert level is low or high. Classifiers based on machine learning and Artificial Neural Networks (ANN) were used, and a genetic algorithm was employed to optimize and select the values that adjust the input features, the neuron activation functions and the network topology/training parameters.
The proposed system achieved 79.5% accuracy on NDBD frames (training and validation sets obtained using an in-house driving simulator) for the joint detection of risk across cell phone use, drunkenness and normal driving situations. The Short-Term classifier used periods of 5 frames, and the Long-Term classifier a window of 140 frames. Considering the individualized detection of driving problems, the system achieved 98% accuracy in the specific case of drunkenness (using drunkenness and normal-driving data) and 95% accuracy for cell phone use. In classifying no-alert (risk-free) situations, the system produced only 1.5% wrong predictions (false positives with alarm activation), contributing to driver comfort when using the system.
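The two-stage Short-Term/Long-Term idea can be caricatured as a rolling-window grading of per-period risk flags. The thesis uses trained classifiers for both stages; the ratio threshold and the toy flag sequence below are assumptions made purely for illustration.

```python
from collections import deque

def long_term_alarm(short_term_flags, window=140, high_ratio=0.5):
    """Grade a stream of short-term risk flags (1 = risk detected in
    that period) into three alert levels by looking at the fraction of
    flagged periods inside a sliding window of recent history."""
    recent = deque(maxlen=window)
    levels = []
    for flag in short_term_flags:
        recent.append(flag)
        ratio = sum(recent) / len(recent)
        if ratio == 0:
            levels.append("no alarm")
        elif ratio < high_ratio:
            levels.append("low alarm")
        else:
            levels.append("high alarm")
    return levels

# Short window so the escalation is visible on a toy flag sequence
levels = long_term_alarm([0, 0, 1, 1, 1, 1], window=4)
```

The windowed vote keeps a single spurious short-term detection from immediately triggering the highest alert, which is one plausible way such a scheme keeps false positives low.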
26

Commande et planification de trajectoires pour la navigation de véhicules autonomes / Control and path planning for navigation of autonomous vehicles

Tagne Fokam, Gilles 18 November 2014 (has links)
This research focuses on control and trajectory planning for the navigation of autonomous vehicles. It is part of a highly ambitious project launched by the Heudiasyc laboratory on autonomous driving at high speed (longitudinal speed greater than 5 m/s ~= 18 km/h). After an extensive literature review on control and trajectory planning for autonomous vehicles, several contributions are presented. Regarding the control of autonomous vehicles, a lateral controller based on higher-order sliding mode is proposed. Given the implicit similarity between the sliding mode and the principle of immersion and invariance (I&I), two controllers using immersion and invariance were subsequently proposed to improve performance with respect to the sliding mode. These new controllers guarantee robust stability for all positive controller gains, a result that led us to study the intrinsic properties of the system. A study of the system's passivity properties revealed interesting passivity characteristics, from which a robust passivity-based controller was developed. Regarding navigation, two navigation algorithms based on the tentacles method were developed to improve on the baseline method. Simulation results show that the algorithms perform well with respect to the expected objectives of obstacle avoidance and global reference path following. The control and trajectory planning algorithms were validated offline in simulation with real data, after being tested on a realistic simulator.
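The higher-order sliding-mode controller itself is not given in the abstract; the following is only a first-order sketch of the general idea. The sliding surface, the gains and the simplified point-mass kinematics are all assumptions (a higher-order design, as in the thesis, would additionally smooth out the chattering of the switching term).

```python
import math

def sliding_mode_steer(lateral_err, heading_err, lam=1.0, k=2.0):
    """First-order sliding-mode lateral control sketch: drive the
    sliding surface s = heading_err + lam * lateral_err to zero with a
    discontinuous switching term of gain k (illustrative values)."""
    s = heading_err + lam * lateral_err
    if s == 0:
        return 0.0
    return -k * math.copysign(1.0, s)

# Crude kinematic simulation: a point at 5 m/s steered onto the centreline
y, psi, v, dt = 1.0, 0.0, 5.0, 0.01   # lateral offset [m], heading [rad]
for _ in range(400):
    psi += sliding_mode_steer(y, psi) * dt  # simplified yaw dynamics
    y += v * math.sin(psi) * dt
```

Once the state reaches the surface s = 0, the heading tracks -lam * y and the lateral offset decays toward zero regardless of bounded model uncertainty, which is the robustness property the abstract refers to.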
27

3D Perception of Outdoor and Dynamic Environment using Laser Scanner / Perception 3D de l'environnement extérieur et dynamique utilisant Laser Scanner

Azim, Asma 17 December 2013 (has links)
Depuis des décennies, les chercheurs essaient de développer des systèmes intelligents pour les véhicules modernes, afin de rendre la conduite plus sûre et plus confortable. Ces systèmes peuvent conduire automatiquement le véhicule ou assister un conducteur en le prévenant et en l'assistant en cas de situations dangereuses. Contrairement aux conducteurs, ces systèmes n'ont pas de contraintes physiques ou psychologiques et font preuve d'une grande robustesse dans des conditions extrêmes. Un composant clé de ces systèmes est la fiabilité de la perception de l'environnement. Pour cela, les capteurs lasers sont très populaires et largement utilisés. Les capteurs laser 2D classiques ont des limites qui sont souvent compensées par l'ajout d'autres capteurs complémentaires comme des caméras ou des radars. Les avancées récentes dans le domaine des capteurs, telles que les capteurs laser 3D qui perçoivent l'environnement avec une grande résolution spatiale, ont montré qu'ils étaient une solution intéressante afin d'éviter l'utilisation de plusieurs capteurs. Bien qu'il y ait des méthodes bien connues pour la perception avec des capteurs laser 2D, les approches qui utilisent des capteurs lasers 3D sont relativement rares dans la littérature. De plus, la plupart d'entre elles utilisent plusieurs capteurs et réduisent le problème de la 3ème dimension en projetant les données 3D sur un plan et utilisent les méthodes classiques de perception 2D. Au contraire de ces approches, ce travail résout le problème en utilisant uniquement un capteur laser 3D et en utilisant les informations spatiales fournies par ce capteur. Notre première contribution est une extension des méthodes génériques de cartographie 3D fondée sur des grilles d'occupations optimisées pour résoudre le problème de cartographie et de localisation simultanée (SLAM en anglais). En utilisant des grilles d'occupations 3D, nous définissons une carte d'élévation pour la segmentation des données laser correspondant au sol. 
Pour corriger les erreurs de positionnement, nous utilisons une méthode incrémentale d'alignement des données laser. Le résultat forme la base pour le reste de notre travail qui constitue nos contributions les plus significatives. Dans la deuxième partie, nous nous focalisons sur la détection et le suivi des objets mobiles (DATMO en anglais). La deuxième contribution de ce travail est une méthode pour distinguer les objets dynamiques des objets statiques. L'approche proposée utilise une détection fondée sur le mouvement et sur des techniques de regroupement pour identifier les objets mobiles à partir de la grille d'occupations 3D. La méthode n'utilise pas de modèles spécifiques d'objets et permet donc la détection de tout type d'objets mobiles. Enfin, la troisième contribution est une méthode nouvelle pour classer les objets mobiles fondée sur une technique d'apprentissage supervisée. La contribution finale est une méthode pour suivre les objets mobiles en utilisant l'algorithme de Viterbi pour associer les nouvelles observations avec les objets présents dans l'environnement, Dans la troisième partie, l'approche propose est testée sur des jeux de données acquis à partir d'un capteur laser 3D monté sur le toit d'un véhicule qui se déplace dans différents types d'environnement incluant des environnements urbains, des autoroutes et des zones piétonnes. Les résultats obtenus montrent l'intérêt du système intelligent proposé pour la cartographie et la localisation simultanée ainsi que la détection et le suivi d'objets mobiles en environnement extérieur et dynamique en utilisant un capteur laser 3D. / With an anticipation to make driving experience safer and more convenient, over the decades, researchers have tried to develop intelligent systems for modern vehicles. The intended systems can either drive automatically or monitor a human driver and assist him in navigation by warning in case of a developing dangerous situation. 
Unlike human drivers, these systems are not constrained by physical and psychological limitations and therefore prove more robust in extreme conditions. A key component of an intelligent vehicle system is reliable perception of the environment. Laser range finders are popular sensors widely used in this context. Classical 2D laser scanners have limitations that are often compensated by the addition of complementary sensors such as cameras and radars. The recent advent of new sensors, such as 3D laser scanners that perceive the environment at high spatial resolution, has proven an interesting addition to the field. Although there are well-known methods for perception using 2D laser scanners, approaches using a 3D range scanner are relatively rare in the literature. Most of those that exist either address the problem partially or augment the system with many other sensors, and many reduce the dimensionality of the problem by projecting 3D data onto a plane and applying the well-established 2D perception methods. In contrast to these approaches, this work addresses the problem of vehicle perception using a single 3D laser scanner. The first contribution of this research extends a generic 3D mapping framework, based on an optimized occupancy grid representation, to solve the problem of simultaneous localization and mapping (SLAM). Using the 3D occupancy grid, we introduce a variance-based elevation map for segmenting the range measurements that correspond to the ground. To correct the vehicle location obtained from odometry, we use a grid-based incremental scan matching method. The resulting SLAM framework forms the basis for the rest of the contributions, which constitute the major achievement of this work. After obtaining a good vehicle localization and a reliable map with ground segmentation, we focus on the detection and tracking of moving objects (DATMO).
The second contribution of this thesis is a method for discriminating between dynamic objects and the static environment. The presented approach uses motion-based detection and density-based clustering to segment moving objects from the 3D occupancy grid. It does not rely on object-specific models and can therefore detect arbitrary traffic participants. The third contribution is an innovative method for layered classification of the detected objects based on a supervised learning technique, which makes it easier to estimate their position over time. The final contribution is a method for tracking the detected objects that uses the Viterbi algorithm to associate new observations with the existing objects in the environment. The proposed framework is verified with datasets acquired from a laser scanner mounted on top of a vehicle moving through different environments, including urban, highway and pedestrian-zone scenarios. The promising results show the applicability of the proposed system for simultaneous localization and mapping with detection, classification and tracking of moving objects in dynamic outdoor environments using a single 3D laser scanner.
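The variance-based elevation map used for ground segmentation can be illustrated with a small sketch. This is a minimal toy version, not the thesis implementation: the function name, cell size, variance threshold, and height tolerance are all assumed values chosen for the example.

```python
import numpy as np

def elevation_map_ground_segmentation(points, cell=0.5, var_thresh=0.02):
    """Label laser returns as ground using a variance-based elevation map.

    points: (N, 3) array of x, y, z range measurements.
    Each 2D grid cell whose z-variance is below var_thresh is treated
    as locally flat; points near that cell's mean elevation are ground.
    """
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    ground = np.zeros(len(points), dtype=bool)
    # Group point indices by their 2D cell.
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(idx)
    for idxs in cells.values():
        z = points[idxs, 2]
        if z.var() < var_thresh:
            # Flat cell: accept points within 10 cm of the mean height.
            ground[idxs] = np.abs(z - z.mean()) < 0.1
    return ground
```

A cell containing a vertical structure (e.g. a pole or a pedestrian) has high z-variance and is rejected wholesale, which is the intuition behind using variance rather than a single height threshold.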
28

Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments / Fusion multi-capteur pour la détection, classification et suivi d'objets mobiles en environnement routier

Chavez Garcia, Ricardo Omar 25 September 2014
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. It is composed of two main tasks: simultaneous localization and mapping (SLAM), which models the static parts of the environment, and detection and tracking of moving objects (DATMO), which models the moving parts. In order to perform good reasoning and control, the system has to model the surrounding environment correctly. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system; therefore, many sensors are typically part of an intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level. Knowledge about the class of moving objects at the detection level can help improve their tracking, yet most current perception solutions consider classification information only as additional information for the final perception output. Management of incomplete information is also an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations such as occlusions, weather conditions and object shifting. It is important to manage these situations by taking them into account in the perception process.
The main contributions of this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that by including the object's class as a key element of the object representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., produce a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture that can accommodate other types or numbers of sensors. First, we define a composite object representation that includes class information as part of the object state, from the early stages through to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures that solve the DATMO problem according to the level at which object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches into a real-time vehicle application. This integration was performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods.
We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focused on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.
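The evidential framework the abstract refers to rests on Dempster's rule of combination, which fuses mass functions from two sources (e.g. lidar and camera classifiers) while tracking their conflict. A minimal sketch follows; the function name and the example mass values in the test are mine, not taken from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements (sets of class
    labels) to masses that each sum to 1. Returns the normalized
    conjunctive combination; mass assigned to the empty intersection
    is treated as conflict and renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

Representing ignorance as mass on the full frame (e.g. {car, truck}) is what lets an uncertain sensor defer to a confident one rather than vote against it.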
29

Cognitively Guided Modeling of Visual Perception in Intelligent Vehicles

Plebe, Alice 20 April 2021
This work proposes a strategy for visual perception in the context of autonomous driving. Despite the growing research aiming to implement self-driving cars, no artificial system can yet claim to have reached the driving performance of a human. Humans, when not distracted or drunk, are still the best drivers currently available. Hence, theories about the human mind and its neural organization could reveal valuable insights into how to design a better autonomous driving agent. This dissertation focuses specifically on the perceptual aspect of driving, and it takes inspiration from four key theories on how the human brain achieves the cognitive capabilities required by the activity of driving. The first idea lies at the foundation of current cognitive science: it argues that thinking nearly always involves some sort of mental simulation, which takes the form of imagery when dealing with visual perception. The second theory explains how perceptual simulation takes place in neural circuits called convergence-divergence zones, which expand and compress information to extract abstract concepts from visual experience and code them into compact representations. The third theory highlights that perception, when specialized for a task as complex as driving, is refined by experience in a process called perceptual learning. The fourth theory, namely the free-energy principle of predictive brains, corroborates the role of visual imagination as a fundamental mechanism of inference. In order to implement these theoretical principles, it is necessary to identify the most appropriate computational tools currently available. Within the consolidated and successful field of deep learning, I select the artificial architectures and strategies that bear a strong resemblance to their cognitive counterparts.
Specifically, convolutional autoencoders correspond closely to the architecture of convergence-divergence zones and the process of perceptual abstraction. The free-energy principle of predictive brains is related to variational Bayesian inference and the use of recurrent neural networks; this principle can be translated into a training procedure that learns abstract representations predisposed to predicting how the current road scenario will change in the future. The main contribution of this dissertation is a method to learn conceptual representations of the driving scenario from visual information. This approach enforces a semantic internal organization, in the sense that distinct parts of the representation are explicitly associated with specific concepts useful in the context of driving. Specifically, the model uses as few as 16 neurons for each of the two basic concepts considered here: vehicles and lanes. At the same time, the approach biases the internal representations towards the ability to predict the dynamics of objects in the scene. This property of temporal coherence allows the representations to be exploited to predict plausible future scenarios and to perform a simplified form of mental imagery. In addition, this work proposes a way to tackle the opaqueness affecting deep neural networks: I present a method that aims to mitigate this issue in the context of longitudinal control for automated vehicles. A further contribution experiments with higher-level prediction spaces, such as occupancy grids, which could reconcile direct application to motor control with biological plausibility.
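The idea of a semantically partitioned latent code (16 units per concept) can be illustrated with a toy loss function. Only the 16-unit slot size comes from the abstract; the function name, decoder form, and mean-squared-error shape are assumptions for the sketch, standing in for the full convolutional autoencoder.

```python
import numpy as np

def partitioned_latent_loss(z, vehicle_target, lane_target,
                            decode_vehicle, decode_lane, slot=16):
    """Toy loss for a semantically partitioned latent code.

    The first `slot` units of z must reconstruct the vehicle concept
    and the remaining units the lane concept, each through its own
    decoder. Gradients thus tie each latent slot to one concept.
    """
    z_vehicle, z_lane = z[:slot], z[slot:]
    err_vehicle = np.mean((decode_vehicle(z_vehicle) - vehicle_target) ** 2)
    err_lane = np.mean((decode_lane(z_lane) - lane_target) ** 2)
    return err_vehicle + err_lane
```

Because each concept's reconstruction error only touches its own slice of the code, the latent space stays interpretable: one can read off, or edit, the "vehicles" part of a scene representation independently of the "lanes" part.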
30

Investigating Antenna Placement on Autonomous Mining Vehicle

Manara, Luca January 2016
Future mines will benefit from connected intelligent transport system technologies. Autonomous mining vehicles will improve safety and productivity while decreasing fuel consumption. Hence, it is necessary for Scania to increase its know-how regarding the design of vehicular communication systems for the harsh mine environment. The scope of this work is to examine the requirements for antenna placement on a future autonomous mining truck and to propose suitable antenna types and positions. Using the electromagnetic simulation suite CST Microwave Studio, the research estimates the impact of a simplified autonomous mining vehicle geometry on basic antenna radiation patterns. Some simulated antenna configurations are assessed with radiation pattern measurements. In order to radiate enough power towards the area surrounding the vehicle and guarantee reliable communications, the truck requires omnidirectional antennas in centered locations, or alternatively one patch antenna on each side. The method used to solve the problem is also assessed: the flexibility provided by the simulation method is emphasized, while some relevant limitations are discussed. Hardware requirements, availability of models and the limited results provided by the software can make the simulation phase unsuitable for evaluating the antenna placement.
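While the thesis relies on full-wave CST simulation, a first-order feel for "radiating enough power towards the area surrounding the vehicle" can come from a free-space link budget (the Friis transmission equation). This is a back-of-the-envelope sketch only, not part of the thesis method; the 5.9 GHz ITS frequency in the test is an assumed example value.

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m):
    """Received power in free space, in dBm (Friis equation).

    pt_dbm: transmit power; gt_dbi, gr_dbi: antenna gains;
    freq_hz: carrier frequency; dist_m: link distance.
    """
    wavelength = 3e8 / freq_hz
    # Free-space path loss in dB.
    fspl_db = 20.0 * math.log10(4.0 * math.pi * dist_m / wavelength)
    return pt_dbm + gt_dbi + gr_dbi - fspl_db
```

Comparing the result against a receiver sensitivity figure shows why placement matters: a few dB of body shadowing from the truck geometry can be the difference between a closed and a broken link, which is what the radiation-pattern simulations quantify in detail.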
