11

Avaliação e proposta de sistemas de câmeras estéreo para detecção de pedestres em veículos inteligentes / Stereo camera systems evaluation and proposal for pedestrian detection in intelligent vehicles

Nakamura, Angelica Tiemi Mizuno 06 December 2017 (has links)
Pedestrian detection is an important area of computer vision with the potential to save lives when applied to vehicles. The application demands accurate detections in real time, with as few false positives as possible. Over the past few years many ideas have been explored, and recent methods based on deep neural network architectures have significantly improved detection performance. Despite this progress, detecting pedestrians far from the vehicle remains a major challenge because of their small scale in the image, so it is necessary to evaluate how effective current methods are at avoiding, or mitigating the severity of, traffic accidents involving pedestrians. As the first contribution of this work, a study was conducted to assess the applicability of state-of-the-art methods for collision avoidance in urban scenarios, taking into account the speed and dynamics of the vehicle, the reaction time, and the performance of the detection methods. The study showed that in fast-moving traffic it is still not possible to use vision-based pedestrian detection to assist the driver, since no current method can detect pedestrians far from the vehicle while operating in real time. However, when only pedestrians at larger scales are considered, traditional sliding-window methods already achieve good accuracy with fast execution. Therefore, in order to restrict detector operation to pedestrians at larger scales and thus make vision-based methods viable in vehicles, a camera configuration was proposed that captures images over a wider range of distances ahead of the vehicle, with pedestrian resolution almost twice that of a commercial camera. Experimental results show a considerable improvement in detection performance, overcoming the difficulty caused by the small scale of distant pedestrians in images.
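The scale problem this abstract describes can be made concrete with the pinhole camera model. The sketch below is illustrative only: the focal length, pedestrian height and distances are assumed values, not figures from the thesis.

```python
def pedestrian_pixel_height(focal_px, height_m, distance_m):
    """Projected height in pixels of a pedestrian under the pinhole model."""
    return focal_px * height_m / distance_m

# A 1.7 m pedestrian seen by a camera with a 1000 px focal length
# shrinks quickly with distance, which is what defeats detectors:
for z in (10, 30, 60):
    h = pedestrian_pixel_height(1000, 1.7, z)
    print(f"{z} m -> {h:.0f} px")
```

Doubling the effective focal length (or the pixel density on target) doubles the projected height at every distance, which is the kind of resolution gain the proposed camera configuration aims for.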
12

Estimação de obstáculos e área de pista com pontos 3D esparsos / Estimation of obstacles and road area with sparse 3D points

Shinzato, Patrick Yuri 26 March 2015 (has links)
According to the World Health Organization, about 1.2 million people die in road crashes worldwide each year; driver assistance systems and autonomous vehicles can reduce this number. Among the many problems that must be solved to make this technology viable, computational perception systems still lack a definitive solution. Two of them, obstacle detection and road detection, normally rely on sophisticated algorithms such as supervised machine learning, which achieves impressive results when trained on well-defined and diverse datasets. However, building, maintaining, and updating a database with examples from many places around the world and in many different situations is laborious and complex, so adaptive and self-supervised methods are good candidates for the detection systems of the near future. In this context, this thesis presents a method to estimate obstacles and the navigable road area using low-cost sensors (stereo cameras), without machine learning techniques and without many of the assumptions commonly made in the literature. The stereo-based methods were compared against 3D-LIDAR approaches and showed similar results. The system can therefore be used as a data pre-processing step to improve or enable adaptive learning methods.
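As an illustration of the kind of geometric reasoning involved, the sketch below marks grid cells as obstacles when their 3D points span too much height. This is a generic occupancy-grid heuristic with invented cell size and threshold, not the thesis's actual algorithm.

```python
import numpy as np

def obstacle_cells(points, cell=0.5, height_thresh=0.3):
    """Mark grid cells whose 3D points span more than `height_thresh`
    metres in height as obstacles; the rest are road candidates.
    `points` is an (N, 3) array of (x, y, z) with z pointing up."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lo, hi = cells.get(key, (z, z))
        cells[key] = (min(lo, z), max(hi, z))
    return {k for k, (lo, hi) in cells.items() if hi - lo > height_thresh}

# Flat ground plus a pole-like obstacle at x = 2.1 m:
ground = [(x * 0.1, 0.2, 0.0) for x in range(50)]
pole = [(2.1, 0.2, z * 0.2) for z in range(10)]   # up to 1.8 m tall
print(obstacle_cells(np.array(ground + pole)))    # -> {(4, 0)}
```

A sparse point cloud, as produced by stereo matching, leaves many cells empty; an adaptive road classifier can then be seeded only from the cells this step labels as free.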
13

Cooperation stereo mouvement pour la detection des objets dynamiques / Stereo-Motion Cooperation - Dynamic Objects Detection

Bak, Adrien 14 October 2011 (has links)
Many embedded robotics applications could benefit from an explicit detection of mobile objects. To date, most approaches rely on classification or on a structural analysis of the scene (V-Disparity is a good example). In recent years there has been growing interest in methods in which structural analysis and motion analysis actively collaborate, since the two processes are closely related. In this context, this thesis proposes two approaches: the first uses the full stereo/motion information, while the second addresses monocular sensors and recovers partial information. The first approach is a novel visual odometry system. While the vast majority of authors treat visual odometry as a non-linear optimization problem, we showed that it can be posed as a purely linear one. We also showed that our approach matches, or even exceeds, the performance of high-end hardware such as inertial measurement units. Building on this visual odometry system, we define a procedure for detecting mobile objects, based on compensating for the influence of the ego-motion and then measuring the residual motion. We then analysed the limitations and possible sources of improvement of this system, and found that the main parameters of the vision system (baseline, focal length) have a first-order impact on detector performance. To the best of our knowledge, this impact had never been described in the literature; we believe our conclusions form a useful set of recommendations for any designer of an intelligent vision system. The second part of this work concerns monocular vision systems, and more specifically the concept of C-Velocity. Where V-Disparity defines a transform of the disparity map that highlights certain planes of the image, C-Velocity defines a transform of the optical flow field, using the position of the Focus of Expansion (FoE), that allows specific planes of the image to be detected easily. In this work we present a modification of C-Velocity: instead of using a prior on the ego-motion (the position of the FoE) to infer the scene structure, we use a prior on the scene structure to localize the FoE, and thus estimate the translational ego-motion. The first results of this work are encouraging and open several directions for future research.
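The mobile-object detection procedure, compensating the ego-motion and thresholding the residual motion, can be illustrated in two dimensions as follows. The flow values and threshold are invented; the thesis operates on real optical flow fields and visual odometry estimates.

```python
import numpy as np

def residual_motion_mask(flow, ego_flow, thresh=1.0):
    """Flag points whose observed optical flow differs from the flow
    predicted by the ego-motion alone: those are mobile-object candidates."""
    residual = np.linalg.norm(flow - ego_flow, axis=1)
    return residual > thresh

# Observed flow for four tracked points (pixels/frame) and the flow
# predicted from visual odometry; only the last point moves on its own.
flow = np.array([[2.0, 0.1], [2.1, 0.0], [1.9, -0.1], [6.0, 3.0]])
ego = np.array([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0], [2.0, 0.0]])
print(residual_motion_mask(flow, ego))
```

The quality of the ego-flow prediction is exactly where the baseline and focal length enter: they bound how small a residual can still be distinguished from odometry noise.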
15

Distance and Tracking Control for Autonomous Vehicles

Hitchings, Mark R., n/a January 1999 (has links)
The author's concept of the distance and tracking control problem for autonomous vehicles relates to the cooperative behaviour of two successive vehicles travelling in the same environment. One vehicle, designated the leader, moves autonomously around its environment while other vehicles, designated followers, maintain a coincident travel path and a desired longitudinal distance with respect to the leader. Distance and tracking control is beneficial in numerous applications, including guiding autonomous vehicles in Intelligent Transport Systems (ITS), which increases traffic safety and the capacity of pre-existing road infrastructure. Service robotics may also benefit from the cost savings and flexibility of distance and tracking control, which enables a number of robots to cooperate on a task beyond the capabilities of just one robot: an intelligent leader robot may guide a number of less intelligent (and therefore less costly and less complex) followers to a work site. The author's approach consisted of two separate solutions: an initial solution used as a starting point and learning experience, and a second, more robust, fuzzy control-based solution. This thesis briefly describes the initial solution but places greater emphasis on the second, which offers a significant improvement and was developed from conclusions drawn from the initial one. Most implementations of distance and tracking control, sometimes referred to as Intelligent Cruise Control (ICC) or platooning, are limited to longitudinal distance control only; leader tracking is performed either implicitly by a separate lane-following control system or by human drivers.
The fuzzy control-based solution offered in this thesis performs both distance and tracking control of an autonomous follower vehicle with respect to a leader vehicle in front of it. It represents a simple and cost-effective solution to the requirements of autonomous vehicles operating in ITS schemes, particularly close-formation platooning. The follower tracks a laser signal emitted by the leader while monitoring the distance to the leader using ultrasonic ranging. These measurements feed a fuzzy controller that adjusts the follower's distance and alignment with respect to the leader. Other systems employed on road vehicles use video-based leader tracking, or lane-following methods based on magnetometers or video; such methods typically suffer from substantial unit and/or infrastructure costs. The limitations of the solutions presented in this thesis arise on curved trajectories at larger longitudinal separations between vehicles. The effects of these limitations on road vehicles have yet to be fully quantified, but they are not thought to disadvantage its use in close-formation platooning. The fuzzy controller has two inputs, the distance and the alignment of the follower with respect to the leader, and asserts two outputs, the left and right wheel velocities that control the speed and trajectory of a differential-drive vehicle. Each input and output membership function has seven terms based on lambda, Z-type and S-type functions. The rule base consists of 49 rules, inference uses the MAX/MIN method, and Centre of Maximum (CoM) defuzzification provides the two crisp outputs to the vehicle motion control.
The methods chosen for the fuzzy control of distance and tracking were selected as a compromise between computational complexity and performance, necessary in order to implement the controller on pre-existing hardware test beds based on 8-bit microcontrollers with limited memory and processing resources. Overall, the fuzzy control-based solution presented in this thesis effectively solves the distance and tracking control problem. It was applied to differential-drive hardware test beds and thoroughly tested both in simulation and on hardware. Several issues are identified regarding the application of the solution to other platforms and to road vehicles. The solution is shown to be directly portable to service robotics applications and, with minor modifications, applicable to road-vehicle close-formation platooning.
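The controller just described uses seven terms per variable, 49 rules, MAX/MIN inference and CoM defuzzification. The sketch below is a much smaller single-input illustration of the same mechanism, with three triangular (lambda-type) terms and invented breakpoints; it is not the thesis's controller.

```python
def tri(x, a, b, c):
    """Triangular (lambda-type) membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_error):
    """Map the follower's distance error (m) to a speed correction (m/s):
    rule strengths truncate each rule's output (MIN-style), and a
    centre-of-maximum style weighted average yields the crisp value."""
    rules = [
        (tri(distance_error, -2.0, -1.0, 0.0), -0.5),  # too close: slow down
        (tri(distance_error, -1.0,  0.0, 1.0),  0.0),  # about right: hold
        (tri(distance_error,  0.0,  1.0, 2.0),  0.5),  # too far: speed up
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(fuzzy_speed(0.5))   # -> 0.25, halfway between "hold" and "speed up"
```

The overlapping terms give the smooth interpolation between rules that makes such a controller cheap enough for an 8-bit microcontroller: only a handful of multiplications and additions per cycle.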
17

Multi-modal, Multi-Domain Pedestrian Detection and Classification: Proposals and Explorations in Visible over Stereo Vision, FIR and SWIR

Miron, Alina Dana 16 July 2014 (has links) (PDF)
The main purpose of constructing intelligent vehicles is to increase safety for all traffic participants. The detection of pedestrians, one of the most vulnerable categories of road users, is paramount for any Advanced Driver Assistance System (ADAS). Although this topic has been studied for almost fifty years, a perfect solution does not yet exist. This thesis focuses on several aspects of pedestrian classification and detection, with the objective of exploring and comparing multiple light spectra (Visible, Short-Wave Infrared, Far Infrared) and modalities (intensity, depth from stereo vision, motion). Among these, Far Infrared (FIR) cameras, capable of measuring the temperature of the scene, are particularly interesting for detecting pedestrians, who are usually warmer than their surroundings. Owing to the lack of suitable public datasets containing thermal images, we acquired and annotated a database, named RIFIR, containing both Visible and Far Infrared images. This dataset allowed us to compare the performance of different state-of-the-art features in the two domains. Moreover, we proposed a new feature adapted to FIR images, called Intensity Self Similarity (ISS), based on the relative intensity similarity between sub-blocks within a pedestrian region of interest. Experiments on different image sequences showed that, in general, the FIR spectrum performs better than the Visible domain, although the fusion of the two provides the best results. The second domain we studied is Short-Wave Infrared (SWIR), a light spectrum never before used for pedestrian classification and detection. Unlike FIR cameras, SWIR cameras can image through the windshield and thus be mounted in the vehicle's cabin. In addition, SWIR imagers can see clearly at long distances, making them suitable for vehicle applications. We acquired and annotated a database, named RISWIR, containing both Visible and SWIR images, which allowed us to compare different pedestrian classification algorithms as well as the two domains. Our tests showed that SWIR is promising for ADAS applications, performing better than the Visible domain on the considered dataset. Even though FIR and SWIR gave promising results, the Visible domain is still widely used because of the low cost of the cameras. Classical monocular imagers used for object detection and classification can lead to computation times well beyond real time; stereo vision reduces the hypothesis search space through the depth information contained in the disparity map. A robust disparity map is therefore essential for good hypotheses about the location of pedestrians, and in this context we proposed different cost functions robust to radiometric distortions for computing it. We also showed that some simple post-processing techniques can greatly improve the quality of the obtained depth images. The use of the disparity map is not strictly limited to hypothesis generation: it can also contribute to feature computation by providing information complementary to colour images. We studied and compared the performance of features computed from different modalities (intensity, depth and flow) and in two domains (Visible and FIR). The results showed that the most robust systems are those that take all three modalities into account, especially when dealing with occlusions.
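The idea behind the ISS feature, comparing the intensities of sub-blocks inside a pedestrian region of interest, can be sketched as follows. The grid size and the use of absolute differences of cell means are illustrative assumptions, not the exact formulation from the thesis.

```python
import numpy as np

def intensity_self_similarity(patch, grid=4):
    """Illustrative self-similarity descriptor: mean intensity of each
    cell in a grid x grid partition, then the absolute difference
    between every pair of cells (upper triangle only)."""
    h, w = patch.shape
    ch, cw = h // grid, w // grid
    means = np.array([patch[r*ch:(r+1)*ch, c*cw:(c+1)*cw].mean()
                      for r in range(grid) for c in range(grid)])
    i, j = np.triu_indices(len(means), k=1)
    return np.abs(means[i] - means[j])

patch = np.random.default_rng(0).random((64, 32))   # a 64x32 ROI
print(intensity_self_similarity(patch).shape)        # 16 cells -> 120 pairs
```

Because the descriptor encodes only relative intensities, it is insensitive to the absolute temperature offset of a FIR image, which is the property that motivates a self-similarity design.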
18

Lane-level vehicle localization with integrity monitoring for data aggregation / Estimation intègre par les véhicules de leur voie de circulation pour l’agrégation de données

Li, Franck 18 December 2018 (has links)
The information stored in digital road maps has become very important for intelligent vehicles. As intelligent vehicles address increasingly complex environments, the accuracy requirements for this information have grown. Regarded as geographic databases, digital road maps contain contextual information about the road network that is crucial for a correct understanding of the environment. When combined with data acquired from on-board sensors, a finer representation of the environment can be built, greatly improving the vehicle's situation understanding and decision-making. Sensor performance can vary drastically depending on the location of the vehicle, mainly due to environmental factors. A map, by contrast, can provide its prior information reliably, unaffected by these external factors, but to do so it depends on another essential component: a localization system.
Global Navigation Satellite Systems (GNSS) are commonly used in the automotive sector to provide an absolute position of the vehicle, but this solution is not perfect: GNSS is prone to errors that also depend greatly on the environment (e.g., multipath caused by buildings). Perception and localization systems are thus two central components of an intelligent vehicle whose performance varies with the vehicle's location. This research focuses on their common denominator, the digital road map, and its use as a tool to assess their performance. The idea developed during this thesis is to use the map as a learning canvas, storing georeferenced information about the performance of the on-board sensors over repeated journeys. This requires a robust localization with respect to the map, obtained through a map-matching process. The main difficulty is the discrepancy between the accuracy of the map and that of the GNSS positioning, which creates ambiguous situations. This thesis develops a map-matching algorithm designed to cope with these ambiguities by providing multiple hypotheses when necessary. The objective is to ensure the integrity of the result by returning a hypothesis set that contains the correct matching with high probability. The method relies on proprioceptive sensors in a dead-reckoning approach aided by map information. A coherence-checking procedure using GNSS as redundant positioning information is then applied to isolate a single coherent hypothesis, which can then be used with confidence to write learning data into the map. The use of the digital map in read/write mode has been assessed, and the whole writing procedure has been tested on real data recorded by experimental vehicles on open roads.
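The pipeline described above, dead reckoning aided by the map, a multi-hypothesis matching step, and a GNSS coherence check, can be sketched as follows. This is a minimal illustration in which road elements are 2-D line segments and the gating distances are hand-picked assumptions; it is not the thesis's actual implementation.

```python
import math

def dead_reckoning_step(pose, v, yaw_rate, dt):
    """Propagate (x, y, heading) from wheel speed and yaw rate."""
    x, y, th = pose
    th_new = th + yaw_rate * dt
    x += v * dt * math.cos(th_new)
    y += v * dt * math.sin(th_new)
    return (x, y, th_new)

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def map_match_hypotheses(pose, road_segments, gate):
    """Return every road segment close enough to the dead-reckoned pose:
    the multiple-hypothesis set kept when the situation is ambiguous."""
    p = (pose[0], pose[1])
    return [seg for seg in road_segments
            if point_segment_distance(p, seg[0], seg[1]) <= gate]

def coherence_check(hypotheses, gnss_fix, gnss_radius):
    """Use the GNSS fix as redundant information: keep only hypotheses
    consistent with it, and commit only when a single one survives."""
    coherent = [seg for seg in hypotheses
                if point_segment_distance(gnss_fix, seg[0], seg[1]) <= gnss_radius]
    return coherent[0] if len(coherent) == 1 else None
```

When the GNSS fix is compatible with more than one candidate, `coherence_check` refuses to commit and returns `None`, mirroring the integrity goal: better no single match than a wrong one written into the map.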
19

Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagem / Real-time road lane marking detection system for intelligent vehicles using image processing

Alves, Thiago Waszak January 2017 (has links)
Mobility is a hallmark of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. An optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transport and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry to image-based Intelligent Transportation Systems that aim to prevent accidents and assist the driver in interpreting urban road markings. This work presents a study of techniques for real-time detection of road lane markings in urban and intercity environments, with the goal of highlighting the lane markings for the human driver or the autonomous vehicle, providing better awareness of the traffic area assigned to the vehicle and alerts of possible risk situations.
The main contribution of this work is to optimize how image processing techniques are used to extract the lane markings, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image that contain the lane markings, reducing by up to 75% the total area to which the lane-extraction techniques are applied. Experimental results showed that the algorithm is robust to many variations of ambient light, shadows and pavements of different colors, in urban environments as well as on highways and motorways. The results show an average correct detection rate of 98.1%, with an average processing time of 13.3 ms.
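The search-area optimization described above can be sketched roughly as follows. The window dimensions, row count and image size are illustrative assumptions, not values from the thesis; the point is that fixed-size windows, re-centred each frame on the previous detection, bound the number of pixels the lane extractor must touch:

```python
def lane_search_windows(last_lane_xs, img_w, img_h, win_w, win_h, rows):
    """Place one fixed-size search window per scan row, centred on the lane
    x-position found in the previous frame (dynamic positioning)."""
    windows = []
    for i, x in enumerate(last_lane_xs[:rows]):
        y = img_h - (i + 1) * win_h                      # stack windows bottom-up
        x0 = min(max(x - win_w // 2, 0), img_w - win_w)  # clamp to image bounds
        windows.append((x0, max(y, 0), win_w, win_h))
    return windows

def searched_fraction(windows, img_w, img_h):
    """Fraction of the frame actually processed by the lane extractor."""
    return sum(w * h for (_, _, w, h) in windows) / float(img_w * img_h)
```

With these illustrative numbers, six 100x40 windows on a 640x480 frame cover under 8% of the image per lane marking, so tracking two markings stays well below the 25% bound implied by the 75% reduction reported in the abstract.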
20

Sistema de identificação de superfícies navegáveis baseado em visão computacional e redes neurais artificiais / Navigable surfaces identification system based on computer vision and artificial neural networks

Shinzato, Patrick Yuri 22 November 2010 (has links)
Autonomous navigation is a fundamental problem in mobile robotics. In order to perform this task, a robot must identify the areas where it can navigate safely. This dissertation proposes a navigable-surface identification system based on computer vision and artificial neural networks. More specifically, it presents a study of image attributes, such as statistical descriptors and elements of different color spaces, used as inputs to artificial neural networks whose task is the identification of navigable surfaces. The developed system combines the classification results of multiple neural network configurations whose main difference is the set of image attributes used as input. This combination of several classifications was designed to achieve greater robustness and better performance in identifying roads in different scenarios.
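The attribute-set ensemble described above can be sketched as follows. The stand-in `LinearClassifier` with hand-picked weights replaces the trained ANNs, and the attribute dictionary is a hypothetical subset of the color-space elements and statistics studied; only the combination scheme, one classifier per attribute set with averaged outputs, reflects the abstract:

```python
import math

def extract_features(px, attribute_set):
    """Build the input vector for one classifier from the chosen image
    attributes (colour-space elements / simple statistics of a pixel)."""
    r, g, b = px
    attrs = {
        "r": r / 255.0, "g": g / 255.0, "b": b / 255.0,
        "intensity": (r + g + b) / (3 * 255.0),
        "excess_green": (2 * g - r - b) / 255.0,
    }
    return [attrs[name] for name in attribute_set]

class LinearClassifier:
    """Stand-in for one trained ANN: a linear score squashed to [0, 1]."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias
    def __call__(self, x):
        z = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

def ensemble_navigable(px, members, threshold=0.5):
    """members: list of (attribute_set, classifier) pairs. Average the
    outputs of classifiers fed with different attribute sets, then
    threshold the combined score."""
    scores = [clf(extract_features(px, attrs)) for attrs, clf in members]
    return sum(scores) / len(scores) >= threshold
```

Here a bright grey pixel (plausible asphalt) would be voted navigable while a strongly green one (plausible vegetation) would not, because classifiers built on different attribute sets compensate for each other's blind spots.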
