  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Navigability estimation for autonomous vehicles using machine learning / Estimação de navegabilidade para veículos autônomos usando aprendizado de máquina

Mendes, Caio César Teodoro 08 June 2017 (has links)
Autonomous navigation in outdoor, unstructured environments is one of the major challenges in the field of robotics. One of its applications, intelligent autonomous vehicles, has the potential to decrease the number of accidents on roads and highways, increase the efficiency of traffic in major cities, and contribute to the mobility of the disabled and elderly. For a robot or vehicle to navigate safely, accurate detection of navigable areas is essential. In this work, we address the task of visual road detection where, given an image, the objective is to classify its pixels as road or non-road. Instead of trying to manually derive an analytical solution for the task, we have used machine learning (ML) to learn it from a set of manually created samples. We have applied both traditional (shallow) and deep ML models to the task. Our main contribution regarding traditional ML models is an efficient and versatile way to aggregate spatially distant features, effectively providing a spatial context to such models. As for deep learning models, we have proposed a new neural network architecture focused on processing time and a new neural network layer, called the semi-global layer, which efficiently provides a global context for the model. All of the proposed methods have been evaluated on the Karlsruhe Institute of Technology (KIT) road detection benchmark, achieving competitive results in all cases.
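The shallow-model contribution described above, aggregating spatially distant features to give a pixel classifier some spatial context, can be sketched roughly as follows. This is a minimal illustration; the function name, offsets, and feature layout are assumptions, not the thesis's actual implementation.

```python
import numpy as np

def aggregate_spatial_context(feature_map, offsets):
    """For each pixel, concatenate its own feature vector with the
    feature vectors found at a set of spatially distant offsets,
    giving a shallow classifier a crude spatial context.

    feature_map: (H, W, C) array of per-pixel features.
    offsets: list of (dy, dx) displacements (clamped at the borders).
    Returns an (H, W, C * (1 + len(offsets))) array.
    """
    H, W, C = feature_map.shape
    ys, xs = np.mgrid[0:H, 0:W]
    parts = [feature_map]
    for dy, dx in offsets:
        sy = np.clip(ys + dy, 0, H - 1)  # clamp so borders reuse edge pixels
        sx = np.clip(xs + dx, 0, W - 1)
        parts.append(feature_map[sy, sx])
    return np.concatenate(parts, axis=-1)
```

The enriched per-pixel vectors can then be fed to any shallow classifier (e.g. an MLP or SVM) that would otherwise see each pixel in isolation.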
32

Adaptive Frontbeleuchtungssysteme im Kraftfahrzeug: Ein Beitrag zur nächtlichen Verkehrssicherheit?

Böhm, Michael 05 July 2013 (has links) (PDF)
Since human visual performance is substantially degraded under low illumination, participating in nighttime traffic is particularly dangerous. Drivers as well as vulnerable road users are not sufficiently aware of this and therefore expose themselves to severe risks. Compared to overall exposure, a disproportionately high number of severe injuries and fatalities occur in nighttime traffic. Besides conventional approaches such as enforcement, education, and infrastructural measures, new automotive systems promise additional gains in road safety. Recently, Adaptive Frontlighting Systems (AFS) have been developed that are meant to improve road illumination in front of the car. To this end, other lit vehicles are detected by a camera, which allows the beam pattern to be adapted to the traffic situation: the maximum illumination is directed at the road to enhance object detection, while oncoming traffic is omitted to prevent glare to other drivers. This functionality also includes high-beam automation. Up to now it had not been convincingly substantiated whether these so-called AFS are actually capable of increasing road safety. Thus, the aim of this thesis was to set up system specifications for the prevention of glare and to assess the impact of adapted light distributions by conducting adequate empirical studies. The first study identified illuminance thresholds intended to ensure that, when AFS are applied, other drivers will not suffer glare beyond the present levels caused by regular low beams. The second experiment examined whether adapting beam patterns within these limits improves detection distances for unlit obstacles on the road. The last study examined the extent of AFS applicability in real nighttime traffic, in order to better estimate the possible efficacy of such systems; the high-beam usage behavior of the test subjects was also analyzed within this driving study. Adapted beam patterns turned out to significantly improve obstacle detection compared to conventional low beams, and adaptive lighting functions could cover a substantial part of the time driven in rural areas. Moreover, high-beam automation could dramatically increase high-beam usage, since drivers mostly fail to switch manually. Taking these findings into consideration, AFS seem well suited to prevent collisions with unlit obstacles during nighttime driving. However, their impact on road safety could remain marginal unless road users are sensitized to the dangers of participating in traffic during darkness. Moreover, AFS cannot counteract all causes of nighttime collisions.
33

Road Surface Modeling using Stereo Vision / Modellering av Vägyta med hjälp av Stereokamera

Lorentzon, Mattis, Andersson, Tobias January 2012 (has links)
Modern cars are often equipped with a variety of sensors that collect information about the car and its surroundings. The stereo camera is an example of a sensor that, in addition to regular images, also provides distances to points in its environment. This information can, for example, be used for detecting approaching obstacles to warn the driver if a collision is imminent, or even to brake the vehicle automatically. Objects that constitute a potential danger are usually located on the road in front of the vehicle, which makes the road surface a suitable reference level from which to measure object heights. This Master's thesis describes how an estimate of the road surface can be found in order to make these height measurements. The thesis describes how the large amount of data generated by the stereo camera can be scaled down to a more effective representation in the form of an elevation map. The report discusses a method for relating data from different instances in time using information from the vehicle's motion sensors, and shows how this method can be used for temporal filtering of the elevation map. For estimating the road surface, two different methods are compared: one that uses a RANSAC approach to iterate toward a good surface-model fit, and one that uses conditional random fields to model the probability that different parts of the elevation map belong to the road. A method for detecting curb lines and using them to improve the road surface estimate is also shown. Both methods for road classification show good results, with a few differences that are discussed towards the end of the report. An example of how the road surface estimate can be used to detect obstacles is also included.
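The RANSAC surface-model fit mentioned above can be sketched as follows. This is a minimal illustration assuming a height-map-style plane z = a·x + b·y + c; the threshold and iteration count are illustrative, not the thesis's values.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane z = a*x + b*y + c to Nx3 points with RANSAC.
    Returns (a, b, c) maximizing the number of inliers whose
    vertical distance to the plane is below `threshold`."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, 0], sample[:, 1], np.ones(3)]
        try:
            model = np.linalg.solve(A, sample[:, 2])  # exact fit to 3 points
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        dist = np.abs(points[:, 0] * model[0] + points[:, 1] * model[1]
                      + model[2] - points[:, 2])
        inliers = int((dist < threshold).sum())
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model
```

Cells of the elevation map lying close to the fitted plane would be labeled road; cells well above it become obstacle candidates.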
34

Vision Based Obstacle Detection And Avoidance Using Low Level Image Features

Senlet, Turgay 01 April 2006 (has links) (PDF)
This study proposes a new method for obstacle detection and avoidance using low-level MPEG-7 visual descriptors. The method includes training a neural network with a subset of MPEG-7 visual descriptors extracted from outdoor scenes. The trained neural network is then used to estimate obstacle presence in real outdoor videos and to perform obstacle avoidance. In the proposed method, obstacle avoidance depends solely on the estimated obstacle-presence data. The backpropagation algorithm on a multi-layer perceptron neural network is utilized as the feature learning method. MPEG-7 visual descriptors describe basic features of the given scene image, and by further processing these features, input data for the neural network is obtained. The learning/training phase is carried out on a specially constructed synthetic video sequence with known obstacles. Validation and tests of the algorithms are performed on actual outdoor videos; tests on indoor videos are also performed to evaluate the performance of the proposed algorithms in indoor scenes. Throughout the study, the OdBot 2 robot platform, developed by the author, is used as the reference platform. For final testing of the obstacle detection and avoidance algorithms, a simulation environment is used. From the simulation results and the tests performed on video sequences, it can be concluded that the proposed obstacle detection and avoidance methods are robust against the visual changes in the environment that are common to most outdoor videos. Findings concerning the methods used are presented and discussed as an outcome of this study.
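The descriptor-to-network pipeline described above can be illustrated with a minimal backpropagation-trained multi-layer perceptron. This is a generic sketch; the layer sizes, learning rate, and the XOR toy data below are assumptions standing in for the study's MPEG-7 descriptor inputs.

```python
import numpy as np

class TinyMLP:
    """Minimal one-hidden-layer perceptron trained with backpropagation
    (binary cross-entropy loss, full-batch gradient descent)."""

    def __init__(self, n_in, n_hidden, rng=0):
        r = np.random.default_rng(rng)
        self.W1 = r.normal(0.0, 1.0, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = r.normal(0.0, 1.0, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)          # hidden activations
        return 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))

    def train(self, X, y, lr=0.5, iters=10000):
        for _ in range(iters):
            p = self.forward(X)
            g_out = (p - y) / len(X)                      # dL/dlogit for BCE
            g_W2 = self.h.T @ g_out
            g_b2 = g_out.sum(0)
            g_h = (g_out @ self.W2.T) * (1.0 - self.h ** 2)  # backprop tanh
            g_W1 = X.T @ g_h
            g_b1 = g_h.sum(0)
            self.W2 -= lr * g_W2
            self.b2 -= lr * g_b2
            self.W1 -= lr * g_W1
            self.b1 -= lr * g_b1
```

In the study's setting, the input rows would be processed MPEG-7 descriptor vectors and the output the estimated obstacle presence.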
36

Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos / Obstacle detection using 3D perception and radar data fusion in automotive vehicles

Rosero, Luis Alberto Rosero 30 January 2017 (has links)
This master's project covers the research and development of methods and algorithms related to radar, computer vision, sensor calibration, and sensor data fusion in autonomous/intelligent vehicles for obstacle detection. The obstacle detection process is divided into three stages: the first is reading the radar and LiDAR signals and capturing data from a properly calibrated stereo camera; the second is the fusion of the data obtained in the previous stage (radar + camera, radar + 3D LiDAR); the third is feature extraction from the information obtained, identifying and differentiating the support plane (ground) from the obstacles, and finally detecting the obstacles resulting from the data fusion. It is thus possible to differentiate the various types of elements identified by the radar, which are confirmed and merged with the data obtained by computer vision or LiDAR (point clouds), yielding a more precise description of their contour, shape, size, and position. During the detection task it is important to locate and segment the obstacles in order to later make decisions regarding the control of the autonomous/intelligent vehicle. It is important to note that radar operates in adverse conditions (little or no light, dust, or fog) but yields only isolated, sparse points representing the obstacles, whereas the stereo camera and 3D LiDAR allow the contours of objects to be defined, representing their volume more adequately; the camera, however, is more susceptible to variations in lighting and to restricted environmental and visibility conditions (e.g. dust, haze, rain). Before the fusion process it is also important to spatially align the sensor data, i.e., to calibrate the sensors appropriately so that data referenced in one sensor's coordinate system can be translated into another sensor's coordinate system or into a global one. This project was developed using the CaRINA II platform of the LRM Laboratory at ICMC/USP São Carlos. Finally, the project was implemented using ROS, OpenCV, and PCL, allowing experiments with real radar, LiDAR, and stereo camera data, as well as an evaluation of the quality of the data fusion and of the obstacle detection with these sensors.
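The spatial alignment step described above, translating data from one sensor's coordinate frame to another once the extrinsic calibration is known, reduces to applying a rigid transform. A minimal numpy sketch follows; in a real system the rotation and translation come from the calibration procedure, and the values below are illustrative.

```python
import numpy as np

def to_homogeneous_transform(R, t):
    """Build a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transfer_points(points, T):
    """Map Nx3 points from one sensor frame to another using T."""
    P = np.c_[points, np.ones(len(points))]  # homogeneous coordinates
    return (P @ T.T)[:, :3]
```

With such transforms, radar returns, LiDAR points, and stereo points can all be expressed in a common frame before fusion.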
37

Detecção de tráfego rodoviário utilizando visão computacional: um algoritmo de apoio ao motorista

Cappellari, Márcio Junior 24 June 2010 (has links)
The concern for traffic safety is as old as the history of the automobile, and many efforts are made by carmakers, public agencies, and researchers to decrease the number of traffic accidents and victims. Many accidents are attributed to human error: because of reckless or unskilled driving, drivers cannot see obstacles in time to avoid a collision. An obstacle may be another vehicle, a pedestrian on the road, a tree, an animal, or any other object that obstructs the driver's path and could cause an accident; this work focuses on identifying other vehicles. The work presents an algorithm capable of detecting obstacles on the road by computer vision: a vehicle is equipped with an embedded monocular camera, with real-time processing and identification of obstacles, supporting the driver by signaling the presence of obstacles in the camera's field of view and alerting him to approaching obstacles with collision risk. Other sensors, such as radar, infrared, or sonar, could assist in obstacle detection; however, the premise of this study is to develop the algorithm using low-cost resources, focused on image processing. Initially, the region in which to search for obstacles, also called the region of interest (ROI), is delimited by detecting the road edges. Next, the detector performs hypothesis generation (HG), identifying obstacle candidates, and then processes them in the hypothesis verification stage to confirm or reject the presence of real obstacles. Image attributes such as color/intensity, symmetry, corners, edges, horizontal and vertical lines, and camera calibration are considered. In addition, a cascade classifier was trained using a set of Haar-like features.
38

Development of algorithms and architectures for driving assistance in adverse weather conditions using FPGAs / Développement d'algorithmes et d'architectures pour l'aide à la conduite dans des conditions météorologiques défavorables en utilisant les FPGA

Botero Galeano, Diego Andres 05 December 2012 (has links)
Due to the increase in traffic volume and the complexity of new transport systems, new Advanced Driver Assistance Systems (ADAS) are a subject of research at many companies, laboratories, and universities. These systems include algorithms and techniques that have been studied over the last decades, such as Simultaneous Localization and Mapping (SLAM), obstacle detection, and stereo vision. Thanks to advances in electronics, robotics, and other domains, new embedded systems are being developed to guarantee the safety of the users of these critical systems. Most of these systems require low power consumption and reduced size, which creates the constraint of executing the algorithms on embedded devices with limited resources. In most algorithms, especially computer vision ones, a large amount of data must be processed at high frequencies, which demands substantial computing resources. FPGAs satisfy this requirement: their parallel architecture, combined with low power consumption and programming flexibility, allows some algorithms to be developed and executed more efficiently than on other processing platforms. This thesis presents several embedded computer vision architectures for ADAS on FPGAs. A distortion correction architecture operating at 100 Hz on two cameras simultaneously is implemented; the correction module can also rectify two images for stereo vision. Obstacle detection algorithms based on Inverse Perspective Mapping (IPM) and classification based on color/texture attributes are presented; the IPM transform exploits the perspective effect of a scene perceived from two different points of view, and the results of the color/texture detection algorithms applied to a multi-camera system are fused into an occupancy grid. An accelerator for applying homographies to images is presented, usable for applications such as generating bird's-eye or side views. Multispectral vision is studied using both infrared and color images: synthetic images are generated from visible and infrared sources to provide a visual aid to the driver, and image enhancement specific to infrared images, based on Contrast Limited Adaptive Histogram Equalization (CLAHE), is implemented and evaluated. An embedded SLAM algorithm is presented with different hardware accelerators (point detection, landmark tracking, active search, correlation, matrix operations). All the algorithms were simulated, implemented, and verified targeting FPGAs; validation was done using development kits, and a custom board integrating all the presented algorithms is described. Virtual components developed in this thesis were used in three different projects: PICASSO (stereo vision), COMMROB (obstacle detection from a multi-camera system), and SART (multispectral vision).
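The IPM transform and the homography accelerator mentioned above both amount to applying a 3x3 homography that maps image pixels to ground-plane (bird's-eye) coordinates. A minimal sketch follows; in practice the homography comes from camera calibration, and the matrix in the usage example is illustrative.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to Nx2 pixel coordinates.
    Points are lifted to homogeneous coordinates, transformed,
    and divided by the projective scale."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```

For example, with H = [[1,0,0],[0,1,0],[0,0.01,1]], the pixel (0, 100) maps to (0, 50): rows farther down the image are compressed, mimicking the ground-plane rectification that IPM performs.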
39

Road scene perception based on fisheye camera, LIDAR and GPS data combination / Perception de la route par combinaison des données caméra fisheye, Lidar et GPS

Fang, Yong 24 September 2015 (has links)
Road scene understanding is a very active research field and one of the key topics for intelligent vehicles. This thesis focuses on the detection and tracking of obstacles by fusing data from a multi-sensor system composed of a lidar (laser rangefinder), a fisheye camera and a global positioning system (GPS). Several steps of the perception chain are studied: extrinsic calibration of the fisheye camera / lidar pair, road detection, and finally the detection and tracking of obstacles on the road.

To process the geometric information from the lidar and the fisheye camera in a common reference frame, a new approach for extrinsic calibration between the two sensors is proposed. The fisheye camera is first calibrated intrinsically; for this, three models from the literature are studied and compared. For the extrinsic calibration, the normal to the lidar plane is estimated by a RANSAC approach coupled with a linear regression, using points whose coordinates are known in the frames of both sensors. A least-squares method based on geometric constraints between the known points, the plane normal and the lidar measurements then yields the extrinsic parameters. The proposed method is tested and evaluated both in simulation and on real data.

Road detection exploiting both fisheye camera and lidar data is addressed next. Detection is initialized by computing an illumination-invariant image based on the log-chromaticity space, and a threshold on the normalized histogram classifies the road pixels. The coherence of the detection is then verified using the lidar measurements, and the road segmentation is finally refined by exploiting two successive road detections together with a distance map computed in the HSI (Hue, Saturation, Intensity) color space. The method is validated on real data.

An obstacle-detection method based on the fisheye camera, the lidar, GPS data and a road map is then proposed, with particular attention to moving objects that appear blurred in the fisheye image. Regions of interest are extracted using the road-detection method above; the central lane marking detected in the image is matched against a road shape model reconstructed from GPS and 2D-SIG map data, using an Inverse Perspective Mapping (IPM) transformation of the image. Regions potentially containing obstacles are then extracted and confirmed with the lidar measurements. The approach is tested on real data and compared to two methods from the literature.

Finally, the detected obstacles are tracked over time by jointly using the fisheye camera and lidar data, building on the previous detection results and a region-growing approach. All the methods proposed in this thesis are tested, evaluated and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
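The coarse road-detection step described above (illumination-invariant image from log-chromaticity, then histogram-based classification) can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the invariant direction `theta` is camera-dependent and assumed known, and the seed region and tolerance are hypothetical parameters.

```python
import numpy as np

def illumination_invariant(rgb, theta):
    """Grayscale image approximately invariant to illumination,
    obtained by projecting log-chromaticity coordinates onto a
    camera-dependent direction `theta` (assumed calibrated offline)."""
    rgb = np.clip(rgb.astype(np.float64), 1.0, None)   # avoid log(0)
    gm = rgb.prod(axis=-1) ** (1.0 / 3.0)              # geometric mean per pixel
    chi = np.log(rgb / gm[..., None])                  # log-chromaticity, H x W x 3
    # project onto the direction orthogonal to the illumination variation
    return chi[..., 0] * np.cos(theta) + chi[..., 2] * np.sin(theta)

def coarse_road_mask(inv, seed_mask, tol=1.5):
    """Classify as road the pixels whose invariant value is close to the
    histogram mode of a seed region (e.g. a trapezoid in front of the car)."""
    vals = inv[seed_mask]
    hist, edges = np.histogram(vals, bins=64)
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    sigma = vals.std() + 1e-9
    return np.abs(inv - mode) < tol * sigma
```

In the thesis the resulting mask is further validated against lidar measurements and refined across successive frames; those steps are omitted here.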
40

Obstacle detection using a monocular camera

Goroshin, Rostislav 19 May 2008 (has links)
The objective of this thesis is to develop a general obstacle segmentation algorithm for use on board a ground-based unmanned vehicle (GUV). The algorithm processes video data captured by a single monocular camera mounted on the GUV. We assume that the GUV moves on a locally planar surface representing the ground plane. We start by deriving the equations of the expected motion field (observed by the camera) induced by the motion of the robot on the ground plane. Given an initial view of a presumably static scene, this motion field is used to generate a predicted view of the same scene after a known camera displacement. This predicted image is compared to the actual image taken at the new camera location by means of an optical flow calculation. Because the planar assumption is used to generate the predicted image, portions of the image that mismatch the prediction correspond to salient feature points on objects lying above or below the ground plane; we consider these objects obstacles for the GUV. We assume that these salient feature points (called "seed pixels") capture the color statistics of the obstacle and use them to initialize a Bayesian region-growing routine that generates a full obstacle segmentation. Alignment of the seed pixels with the obstacle is not guaranteed due to the aperture problem; however, successful segmentations were obtained for natural scenes. The algorithm was tested offline using video captured by a camera mounted on a GUV.
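The core geometric idea above, predicting how a planar ground scene should look after a known camera motion and flagging pixels that disagree, can be sketched with the standard planar-scene homography H = K (R - t nᵀ / d) K⁻¹. This is a simplified illustration under assumed calibration: the thesis compares views via an optical-flow computation, whereas here a plain intensity residual threshold stands in for that step.

```python
import numpy as np

def ground_plane_homography(K, R, t, n, d):
    """Homography mapping ground-plane pixels from the first view to the
    second, for a camera motion (R, t) and a plane with normal `n` at
    distance `d` in the first camera frame: H = K (R - t n^T / d) K^-1."""
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    return K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)

def obstacle_seeds(predicted, actual, thresh=25.0):
    """Pixels where the planar prediction disagrees with the new image.
    Large residuals flag points off the ground plane; these serve as
    seed pixels for a subsequent region-growing segmentation."""
    residual = np.abs(predicted.astype(float) - actual.astype(float))
    return residual > thresh
```

With zero translation and no rotation the homography reduces to the identity, so a static camera predicts the scene exactly and no seeds fire on a truly planar, static world.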
