41 |
Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos / Obstacle detection using 3D perception and radar data fusion in automotive vehicles
Luis Alberto Rosero Rosero, 30 January 2017
Este projeto de mestrado visa a pesquisa e o desenvolvimento de métodos e algoritmos, relacionados ao uso de radares, visão computacional, calibração e fusão de sensores em veículos autônomos/inteligentes, para fazer a detecção de obstáculos. O processo de detecção de obstáculos se divide em três etapas: a primeira é a leitura dos sinais do Radar e do LiDAR e a captura de dados da câmera estéreo devidamente calibrados; a segunda etapa é a fusão dos dados obtidos na etapa anterior (Radar+câmera, Radar+LiDAR 3D); a terceira etapa é a extração de características das informações obtidas, identificando e diferenciando o plano de suporte (chão) dos obstáculos, e finalmente realizando a detecção dos obstáculos resultantes da fusão dos dados. Assim é possível diferenciar os diversos tipos de elementos identificados pelo Radar, que são confirmados e unidos aos dados obtidos por visão computacional ou LiDAR (nuvens de pontos), obtendo uma descrição mais precisa do contorno, formato, tamanho e posicionamento destes. Na tarefa de detecção é importante localizar e segmentar os obstáculos para posteriormente tomar decisões referentes ao controle do veículo autônomo/inteligente. É importante destacar que o Radar opera em condições adversas (pouca ou nenhuma iluminação, com poeira ou neblina), porém permite obter apenas pontos isolados (esparsos) representando os obstáculos. Por outro lado, a câmera estéreo e o LiDAR 3D permitem definir os contornos dos objetos, representando mais adequadamente seu volume; porém, a câmera é mais suscetível a variações na iluminação e a condições ambientais e de visibilidade restritas (p.ex. poeira, neblina, chuva).
Também devemos destacar que antes do processo de fusão é importante alinhar espacialmente os dados dos sensores, isto é, calibrar adequadamente os sensores para poder transladar dados fornecidos por um sensor, referenciados no seu próprio sistema de coordenadas, para o sistema de coordenadas de outro sensor ou para um sistema de coordenadas global. Este projeto foi desenvolvido usando a plataforma CaRINA II, desenvolvida junto ao Laboratório LRM do ICMC/USP São Carlos. Por fim, o projeto foi implementado usando o ambiente ROS, OpenCV e PCL, permitindo a realização de experimentos com dados reais de Radar, LiDAR e câmera estéreo, bem como uma avaliação da qualidade da fusão dos dados e da detecção de obstáculos com estes sensores. / This master's project aims to research and develop methods and algorithms related to the use of radars, computer vision, calibration and sensor data fusion in autonomous/intelligent vehicles to detect obstacles. The obstacle detection process is divided into three stages: the first is the reading of the Radar and LiDAR signals and the capture of properly calibrated stereo camera data; the second stage is the fusion of the data obtained in the previous stage (Radar + camera, Radar + 3D LiDAR); the third stage is the extraction of features from the information obtained, identifying and differentiating the support plane (ground) from the obstacles, and finally performing the detection of the obstacles resulting from the data fusion. Thus it is possible to differentiate the types of elements identified by the Radar, which are confirmed and merged with the data obtained by computer vision or LiDAR (point clouds), obtaining a more precise description of their contour, shape, size and position. During the detection task it is important to locate and segment the obstacles in order to later make decisions regarding the control of the autonomous/intelligent vehicle.
It is important to note that Radar operates in adverse conditions (little or no light, dust or fog), but provides only sparse, isolated points representing the obstacles. On the other hand, the stereo camera and 3D LiDAR make it possible to define the contours of objects, representing their volume more adequately, although the camera is more susceptible to variations in lighting and to restricted environmental and visibility conditions (e.g. dust, haze, rain). Before the fusion it is important to spatially align the sensor data, calibrating the sensors appropriately, so that data provided by one sensor, referenced in its own coordinate system, can be translated to the coordinate system of another sensor or to a global coordinate system. This project was developed using the CaRINA II platform, developed at the LRM Laboratory, ICMC/USP São Carlos. Finally, the project was implemented using the ROS, OpenCV and PCL environments, allowing experiments with real data from Radar, LiDAR and stereo camera, as well as an evaluation of the quality of the data fusion and obstacle detection with these sensors.
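The coordinate alignment described in this abstract amounts to applying a rigid-body transform obtained from extrinsic calibration. A minimal sketch of that step (the mounting offsets below are invented for illustration, not CaRINA II's actual calibration):

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

# Illustrative extrinsics: radar mounted 1.5 m ahead of and 0.2 m below the
# camera origin, with identical orientation (identity rotation).
T_cam_from_radar = make_transform(np.eye(3), np.array([0.0, -0.2, 1.5]))
radar_points = np.array([[0.0, 0.0, 10.0]])  # one radar target 10 m ahead
print(transform_points(T_cam_from_radar, radar_points))
# → the same target at (0.0, -0.2, 11.5) in camera coordinates
```

In practice such transforms are chained (radar → vehicle → camera) so every sensor's detections land in one common frame before fusion.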
|
42 |
Bezkolizní navigace mobilního robotu / Mobile robot navigation with obstacle avoidance
Sttesk, Vladimír, January 2015
This thesis deals with an automatically guided mobile robot, focusing on obstacle avoidance while driving along a planned route. Commonly used obstacle-detection sensors and path-finding algorithms are surveyed. Based on this survey, an original solution is designed: it modifies waypoints to pass around an obstacle. A MATLAB simulation was created to test the newly designed method, which was then implemented on a real robot for real-world testing. The goals achieved and possibilities for future improvement are summarized at the end of the thesis.
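The waypoint-modification idea can be sketched as follows; the geometry below is a generic illustration of inserting a detour waypoint perpendicular to the planned segment, not the thesis's actual algorithm:

```python
import math

def detour_waypoint(start, goal, obstacle, clearance):
    """If the obstacle lies near the start-goal segment, return an extra waypoint
    offset perpendicular to the path; otherwise return None."""
    ax, ay = start
    bx, by = goal
    ox, oy = obstacle
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length          # unit direction along the path
    t = (ox - ax) * ux + (oy - ay) * uy        # projection of obstacle onto the path
    if not 0.0 <= t <= length:
        return None                            # obstacle is not between the waypoints
    px, py = ax + t * ux, ay + t * uy          # closest point on the segment
    if math.hypot(ox - px, oy - py) > clearance:
        return None                            # far enough away, no detour needed
    # Shift the closest point sideways, away from the obstacle.
    nx, ny = -uy, ux                           # left-hand normal of the path
    side = 1.0 if (ox - px) * nx + (oy - py) * ny < 0 else -1.0
    return (px + side * clearance * nx, py + side * clearance * ny)

print(detour_waypoint((0, 0), (10, 0), (5, 0.5), clearance=2.0))  # → (5.0, -2.0)
```

The planner then drives start → detour → goal, restoring the original route once the obstacle is passed.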
|
43 |
Approches 2D/2D pour le SFM à partir d'un réseau de caméras asynchrones / 2D/2D approaches for SFM using an asynchronous multi-camera network
Mhiri, Rawia, 14 December 2015
Les systèmes d'aide à la conduite et les travaux concernant le véhicule autonome ont atteint une certaine maturité durant ces dernières années grâce à l'utilisation de technologies avancées. Une étape fondamentale pour ces systèmes porte sur l'estimation du mouvement et de la structure de l'environnement (Structure From Motion) pour accomplir plusieurs tâches, notamment la détection d'obstacles et de marquage routier, la localisation et la cartographie. Pour estimer leurs mouvements, de tels systèmes utilisent des capteurs relativement chers. Pour être commercialisés à grande échelle, il est alors nécessaire de développer des applications avec des dispositifs bas coûts. Dans cette optique, les systèmes de vision se révèlent une bonne alternative. Une nouvelle méthode basée sur des approches 2D/2D à partir d'un réseau de caméras asynchrones est présentée afin d'obtenir le déplacement et la structure 3D à l'échelle absolue en prenant soin d'estimer les facteurs d'échelle. La méthode proposée, appelée méthode des triangles, se base sur l'utilisation de trois images formant un triangle : deux images provenant de la même caméra et une image provenant d'une caméra voisine. L'algorithme admet trois hypothèses : les caméras partagent des champs de vue communs (deux à deux), la trajectoire entre deux images consécutives provenant d'une même caméra est approximée par un segment linéaire et les caméras sont calibrées. La connaissance de la calibration extrinsèque entre deux caméras, combinée avec l'hypothèse de mouvement rectiligne du système, permet d'estimer les facteurs d'échelle absolue. La méthode proposée est précise et robuste pour les trajectoires rectilignes et présente des résultats satisfaisants pour les virages.
Pour affiner l'estimation initiale, certaines erreurs dues aux imprécisions dans l'estimation des facteurs d'échelle sont améliorées par une méthode d'optimisation : un ajustement de faisceaux local appliqué uniquement sur les facteurs d'échelle absolue et sur les points 3D. L'approche présentée est validée sur des séquences de scènes routières réelles et évaluée par rapport à la vérité terrain obtenue par un GPS différentiel. Une application fondamentale dans les domaines d'aide à la conduite et de la conduite automatisée est la détection de la route et d'obstacles. Pour un système asynchrone, une première approche pour traiter cette application est présentée en se basant sur des cartes de disparité éparses. / Driver assistance systems and autonomous vehicles have reached a certain maturity in recent years through the use of advanced technologies. A fundamental step for these systems is the motion and structure estimation (Structure From Motion) that accomplishes several tasks, including the detection of obstacles and road marking, localisation and mapping. To estimate their movements, such systems use relatively expensive sensors. In order to market such systems on a large scale, it is necessary to develop applications with low-cost devices. In this context, vision systems are a good alternative. A new method based on 2D/2D approaches from an asynchronous multi-camera network is presented to obtain the motion and the 3D structure at the absolute scale, focusing on estimating the scale factors. The proposed method, called the Triangle Method, is based on the use of three images forming a triangle: two images from the same camera and an image from a neighboring camera. The algorithm has three assumptions: the cameras share common fields of view (two by two), the path between two consecutive images from a single camera is approximated by a line segment, and the cameras are calibrated.
The extrinsic calibration between two cameras, combined with the assumption of rectilinear motion of the system, allows the absolute scale factors to be estimated. The proposed method is accurate and robust for straight trajectories and presents satisfactory results for curved trajectories. To refine the initial estimation, some errors due to inaccuracies in the scale estimation are reduced by an optimization method: a local bundle adjustment applied only to the absolute scale factors and the 3D points. The presented approach is validated on sequences of real road scenes and evaluated with respect to ground truth obtained through a differential GPS. Finally, another fundamental application in the fields of driver assistance and automated driving is road and obstacle detection. A method is presented for an asynchronous system based on sparse disparity maps.
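At its core, the scale recovery exploited by such a triangle method can be illustrated by closing a triangle whose third side is the known metric baseline: epipolar geometry yields only unit direction vectors, and the known side fixes both unknown lengths. A simplified sketch ignoring rotations (synthetic numbers, not from the thesis):

```python
import numpy as np

def recover_scales(d1, d2, baseline):
    """Least-squares solution of the triangle closure s1*d1 + baseline = s2*d2.

    d1, d2: unit direction vectors of two triangle edges, known only up to
    scale from 2D/2D epipolar geometry.
    baseline: known metric vector (third edge) from the extrinsic calibration.
    Returns the absolute scales (s1, s2) in metres.
    """
    A = np.column_stack([d1, -np.asarray(d2)])   # 3x2 system: s1*d1 - s2*d2 = -baseline
    s, *_ = np.linalg.lstsq(A, -np.asarray(baseline), rcond=None)
    return s

# Synthetic check: 2 m forward ego-motion, 0.5 m lateral inter-camera baseline.
motion = np.array([0.0, 0.0, 2.0])               # true metric ego displacement
base = np.array([0.5, 0.0, 0.0])                 # known extrinsic baseline
cross = motion + base                            # third triangle edge
s1, s2 = recover_scales(motion / np.linalg.norm(motion),
                        cross / np.linalg.norm(cross), base)
print(round(float(s1), 3))  # → 2.0: the metric scale of the ego-motion is recovered
```

The actual method additionally handles camera rotations and asynchronous timestamps; this sketch only shows why a single known metric edge is enough to fix the absolute scale.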
|
44 |
Robust Object Detection under Varying Illuminations and Distortions
January 2020
Object detection is a computer vision area concerned with detecting object instances belonging to specific classes of interest and localizing these instances in images and/or videos. Object detection serves as a vital module in many computer vision based applications. This work focuses on the development of object detection methods that exhibit increased robustness to varying illumination and image quality. Two methods for robust object detection are presented.
In the context of varying illumination, this work focuses on robust generic obstacle detection and collision warning in Advanced Driver Assistance Systems (ADAS) under varying illumination conditions. The highlight of the first method is its ability to detect all obstacles without prior knowledge, including partially occluded obstacles and obstacles that have not yet completely appeared in the frame (truncated obstacles). It is first shown that the angular distortion of obstacle edges in the Inverse Perspective Mapping (IPM) domain varies as a function of their corresponding 2D location in the camera plane. This information is used to generate object proposals. A novel proposal assessment method is also proposed, fusing statistical properties from both the IPM image and the camera image to perform robust outlier elimination and false positive reduction.
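Inverse Perspective Mapping is typically realized as a homography from the image plane to the ground plane. A minimal sketch of the point mapping (the matrix `H` below is an illustrative stand-in, not a real calibrated camera-to-ground homography):

```python
import numpy as np

def ipm_point(H, pixel):
    """Map an image pixel to ground-plane coordinates through a homography H,
    including the perspective division by the third homogeneous coordinate."""
    x, y, w = H @ np.array([pixel[0], pixel[1], 1.0])
    return float(x / w), float(y / w)

# Illustrative stand-in homography: a pixel-to-metre scaling plus a perspective
# term that makes the mapping depend on the image row, as real IPM does.
H = np.array([[0.01, 0.0,   0.0],
              [0.0,  0.01,  0.0],
              [0.0,  0.001, 1.0]])
gx, gy = ipm_point(H, (320, 240))
print(round(gx, 2), round(gy, 2))  # → 2.58 1.94
```

Because the division by `w` varies across the image, vertical obstacle edges are smeared at angles that depend on their pixel location, which is exactly the distortion cue the method above turns into object proposals.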
In the context of image quality, this work focuses on robust multiple-class object detection using deep neural networks for images with varying quality. The use of Generative Adversarial Networks (GANs) is proposed in a novel generative framework to generate features that provide robustness for object detection on reduced-quality images. The proposed GAN-based Detection of Objects (GAN-DO) framework is not restricted to any particular architecture and can be generalized to several deep neural network (DNN) based architectures. The resulting deep neural network maintains the same architecture as the selected baseline model without adding to the model parameter complexity or inference time. Performance results using GAN-DO on object detection datasets establish improved robustness to varying image quality and higher object detection and classification accuracy compared to existing approaches. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
|
45 |
Adaptive Frontbeleuchtungssysteme im Kraftfahrzeug: Ein Beitrag zur nächtlichen Verkehrssicherheit?
Böhm, Michael, 25 June 2013
Da die menschliche Sehleistung bei geringer Beleuchtung stark vermindert ist, birgt die Teilnahme am nächtlichen Straßenverkehr besondere Gefahren. Sowohl Kraftfahrzeugführer als auch schwächere Verkehrsteilnehmer sind sich dieser Problematik offenbar nicht hinlänglich bewusst und verhalten sich häufig hochriskant. Dies hat, gemessen an der Exposition, eine überproportionale Häufigkeit und Schwere von Nachtunfällen zur Folge. Um dieser Situation zu begegnen, erscheinen neben konventionellen Präventionsmaßnahmen der Verkehrsüberwachung und -erziehung oder Eingriffen in die Verkehrsinfrastruktur auch neuartige fahrzeugtechnische Systeme geeignet. So wurden in den letzten Jahren Fahrerassistenzfunktionen entwickelt, welche mittels adaptiver Lichtsteuerung die Ausleuchtung des Verkehrsraumes verbessern sollen. Hierfür wird das lichttechnische Signalbild anderer Fahrzeuge mittels einer Kamera erfasst und die eigene Scheinwerferlichtverteilung so angepasst, dass die Straße maximal ausgeleuchtet wird, um die Hinderniserkennung zu verbessern und trotzdem gleichzeitig eine Blendung anderer Kraftfahrer zu vermeiden. Als zusätzlich integrierte Funktion kommt auch eine automatisierte Fernlichtschaltung zum Einsatz.
Bislang war nicht belegt, ob diese sogenannten Adaptiven Frontbeleuchtungssysteme (AFS) in der Lage sind, tatsächlich zu einer Erhöhung der nächtlichen Verkehrssicherheit beizutragen. Ziel der vorliegenden Arbeit war es, Anforderungen zur Blendungsvermeidung beim Einsatz derartiger Assistenzfunktionen aufzustellen und die Wirksamkeit adaptierter Scheinwerferlichtverteilungen zu bewerten. Hierfür wurden entsprechende empirische Untersuchungen durchgeführt. So konnten in der ersten Studie Blendungsgrenzwerte ermittelt werden, welche sicherstellen sollen, dass andere Verkehrsteilnehmer nicht über das bislang übliche Maß hinaus durch die Scheinwerfer geblendet werden, wenn neuartige AFS zum Einsatz kommen. In einem weiteren Experiment wurde geprüft, ob unter Einhaltung dieser Grenzwerte eine nennenswerte Erhöhung der Erkennbarkeitsentfernungen für schlecht sichtbare Hindernisse auf der Straße erreichbar ist. Die letzte Studie beschäftigte sich mit der Frage, in welchem Umfang adaptierte Lichtverteilungen im realen Straßenverkehr zum Einsatz kämen, um deren mögliche Wirksamkeit besser beurteilen zu können. Parallel hierzu wurde auch das Fernlichtnutzungsverhalten der Probanden untersucht.
Wie die durchgeführten Untersuchungen zeigen konnten, ergeben sich durch den Einsatz adaptierter Lichtverteilungen signifikante Verbesserungen bezüglich der Erkennbarkeit von Hindernissen gegenüber konventioneller Kraftfahrzeugbeleuchtung in teils beträchtlichem Ausmaß. Außerdem konnte ermittelt werden, dass adaptierte Scheinwerferlichtverteilungen im realen Straßenverkehr in erheblichem Umfang zum Tragen kämen. Aufgrund der viel zu geringen Fernlichtnutzung könnten Kraftfahrer auch besonders stark von der automatisierten Fernlichtschaltung profitieren. Damit kann davon ausgegangen werden, dass neuartige AFS tatsächlich überaus geeignet sind, nächtliche Kollisionen von Kraftfahrzeugen mit unbeleuchteten schwächeren Verkehrsteilnehmern oder Wild zu vermeiden. Trotz dieser Einschätzung sind die letztlich zu erwartenden positiven Auswirkungen auf die Verkehrssicherheit womöglich eher gering, wenn es nicht gelingt, alle Verkehrsteilnehmer für die Gefahren des nächtlichen Straßenverkehrs zu sensibilisieren. Zudem können Adaptive Frontbeleuchtungssysteme selbstverständlich nicht allen Ursachen nächtlicher Kollisionen mit Hindernissen auf der Straße wirkungsvoll begegnen. / Since the human visual performance is substantially degraded under low illumination levels participating in nighttime traffic is particularly dangerous. Drivers as well as vulnerable road users are not sufficiently aware of this and therefore expose themselves to severe risks. Compared to overall exposure, a disproportionately high number of severe injuries and fatalities occur in nighttime traffic. Besides conventional approaches such as enforcement, education, and infrastructural measures, new automotive systems promise additional gains in road safety. Recently, Adaptive Frontlighting Systems (AFS) have been developed that are meant to improve road illumination in front of the car. 
To this end, other lit vehicles are detected by a camera, which allows the beam pattern to be adapted according to the traffic situation. The maximum illumination is directed at the road to enhance object detection, while oncoming traffic is omitted to prevent glare to other drivers. This functionality also includes high-beam automation.
Up to now it has not been convincingly substantiated whether so-called AFS are actually capable of increasing road safety. Thus, the aim of this thesis was to set up system specifications for the prevention of glare and to assess the impact of adapted light distributions by conducting adequate empirical studies. The first study identified illuminance thresholds in order to ensure that, when AFS are applied, other drivers will not suffer from glare beyond the levels caused by regular low beams. The second experiment examined whether adapting beam patterns within these identified limits improves detection distances for unlit obstacles on the road. The last study examined the extent of AFS applicability in real nighttime traffic, to better estimate the possible efficacy of such systems. The high-beam usage behavior of the test subjects was also analyzed within this driving study.
Adapted beam patterns turned out to significantly improve obstacle detection in comparison to conventional low beams. It was found that adaptive lighting functions could cover a substantial part of the time driven in rural areas. Besides, high-beam automation could dramatically increase high-beam usage, since drivers mostly fail to switch manually. Taking these findings into consideration, AFS seem well suited to prevent collisions with unlit obstacles during nighttime driving. However, their impact on road safety could remain marginal unless road users are sensitized to the dangers of participating in traffic during darkness. Moreover, AFS cannot counteract all causes of nighttime collisions.
|
46 |
Millimeter Wave Radar as Navigation Sensor on Robotic Vacuum Cleaner / Millimetervågsradar som navigationssensor på robotdammsugare
Blomqvist, Anneli, January 2020
Does millimeter-wave radar have the potential to be the navigational instrument of a robotic vacuum cleaner in a home? Electrolux's robotic vacuum cleaner currently uses a light sensor to navigate through the home while cleaning. Recently, Texas Instruments released a new mmWave radar sensor operating in the frequency range 60-64 GHz. This study aims to answer whether the mmWave radar sensor is useful for indoor navigation. The study tests the sensor's accuracy and resolution in angle and distance at ranges relevant to indoor navigation. It tests whether various objects made of plastic, fabric, paper, metal, and wood are detectable by the sensor. Finally, it tests what the sensor can see when it is moving while measuring. The radar sensor can localize the robot, but its ability to detect objects around the robot is limited. The sensor's absolute accuracy is within 3° for the majority of angles and around 1 dm for most distances above 0.5 m. The resolution for a displacement of a single object is 1° in angle and 5 cm in range, and two objects must be located at least 14° or 15 cm apart from each other to be distinguished. Future tasks include removing noise due to antenna coupling, to improve reflections from within 0.5 m, and finding the best way to move the sensor around to improve the resolution.
Slutligen testas vad sensorn kan se om den rör sig medan den mäter. Radarsensorn kan positionera roboten, men hinderdetektering omkring roboten är begränsad. För det mesta ligger sensorns absoluta noggrannhet inom 3° för vinklar och omkring 1 dm för avstånd över 0,5 m. Upplösningen för en förflyttning av ett objekt är 1° respektive 5 cm, och två objekt måste placeras minst 14° eller 15 cm ifrån varandra för att båda ska kunna upptäckas. Kommande utmaningar är att ta bort antennstörningar som ger sämre reflektioner inom 0,5 meter och ta reda på det bästa sättet att förflytta sensorn för att förbättra upplösningen.
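The reported 5 cm separability is close to the theoretical FMCW range resolution c/(2B). A quick check, assuming the sensor sweeps the full 4 GHz of the 60-64 GHz band (the study's actual chirp configuration may use less bandwidth):

```python
C = 299_792_458.0  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Theoretical FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# Full 4 GHz sweep of the 60-64 GHz band:
print(round(range_resolution(4e9) * 100, 2))  # → 3.75 (cm), in line with the measured ~5 cm
```

The gap between the 3.75 cm theoretical limit and the measured ~5 cm is plausible windowing and processing loss.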
|
47 |
Fusão de informações obtidas a partir de múltiplas imagens visando à navegação autônoma de veículos inteligentes em ambiente agrícola / Data fusion obtained from multiple images aiming at the navigation of autonomous intelligent vehicles in agricultural environment
Utino, Vítor Manha, 08 April 2015
Este trabalho apresenta um sistema de auxílio à navegação autônoma para veículos terrestres, com foco em ambientes estruturados em um cenário agrícola. É gerada a estimativa das posições dos obstáculos baseada na fusão das detecções provenientes do processamento dos dados de duas câmeras, uma estéreo e outra térmica. Foram desenvolvidos três módulos de detecção de obstáculos. O primeiro módulo utiliza imagens monoculares da câmera estéreo para detectar novidades no ambiente através da comparação do estado atual com o estado anterior. O segundo módulo utiliza a técnica Stixel para delimitar os obstáculos acima do plano do chão. Por fim, o terceiro módulo utiliza as imagens térmicas para encontrar assinaturas que evidenciem a presença de obstáculo. Os módulos de detecção são fundidos utilizando a Teoria de Dempster-Shafer, que fornece a estimativa da presença de obstáculos no ambiente. Os experimentos foram executados em ambiente agrícola real. Foi executada a validação do sistema em cenários bem iluminados, com terreno irregular e com obstáculos diversos. O sistema apresentou um desempenho satisfatório, tendo em vista a utilização de uma abordagem baseada em apenas três módulos de detecção, com metodologias que não têm por objetivo priorizar a confirmação de obstáculos, mas sim a busca de novos obstáculos. Nesta dissertação são apresentados os principais componentes de um sistema de detecção de obstáculos e as etapas necessárias para a sua concepção, assim como resultados de experimentos com o uso de um veículo real. / This work presents a support system for autonomous navigation of ground vehicles, with a focus on structured environments in an agricultural scenario. The estimated obstacle positions are generated based on the fusion of the detections from the processing of data from two cameras, one stereo and the other thermal. Three obstacle detection modules have been developed.
The first module uses monocular images from the stereo camera to detect novelties in the environment by comparing the current state with the previous state. The second module uses the Stixel technique to delimit the obstacles above the ground plane. Finally, the third module uses thermal images to find signatures that reveal the presence of an obstacle. The detection modules are fused using Dempster-Shafer theory, which provides an estimate of the presence of obstacles in the environment. The experiments were executed in a real agricultural environment. System validation was performed in well-lit scenarios, with uneven terrain and different obstacles. The system showed satisfactory performance considering the use of an approach based on only three detection modules, with methods that do not prioritize obstacle confirmation but rather the search for new obstacles. This dissertation presents the main components of an obstacle detection system and the necessary steps for its design, as well as results of experiments with the use of a real vehicle.
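The Dempster-Shafer fusion step can be sketched with Dempster's rule of combination over a two-element frame {obstacle, free}; the mass values below are invented for illustration, not taken from the dissertation:

```python
def combine(m1, m2):
    """Dempster's rule of combination over the frame {'obstacle', 'free'},
    with 'theta' holding the ignorance mass assigned to the whole frame.
    Conflicting mass (obstacle vs free) is renormalized away."""
    keys = ('obstacle', 'free', 'theta')
    fused = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            p = m1[a] * m2[b]
            if a == b:
                fused[a] += p
            elif 'theta' in (a, b):           # theta intersected with X is X
                fused[a if b == 'theta' else b] += p
            else:                             # obstacle vs free: empty intersection
                conflict += p
    norm = 1.0 - conflict
    return {k: v / norm for k, v in fused.items()}

# Two detectors that weakly agree an obstacle is present:
m_stereo = {'obstacle': 0.6, 'free': 0.1, 'theta': 0.3}
m_thermal = {'obstacle': 0.5, 'free': 0.2, 'theta': 0.3}
fused = combine(m_stereo, m_thermal)
print(round(fused['obstacle'], 3))  # → 0.759: agreement strengthens the belief
```

Note how the combined obstacle mass (0.759) exceeds either detector's individual mass, which is exactly why the fusion can confirm obstacles that no single module asserts strongly.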
|
48 |
[en] OBSTACLE DETECTION AND AVOIDANCE SYSTEM FOR UAVS, BASED ON NEURO-FUZZY CONTROLLER / [pt] SISTEMA DE DETECÇÃO E DESVIO DE OBSTÁCULOS PARA VANTS, BASEADO EM CONTROLADOR NEURO-FUZZY
VINICIUS DE MELLO LIMA, 16 April 2019
[pt] Esta dissertação apresenta o projeto e desenvolvimento de um sistema para detecção e desvio de obstáculos para veículos aéreos não tripulados (VANTs), implementado por um controlador neuro-fuzzy. Neste contexto, este trabalho apresenta uma revisão teórica sobre veículos aéreos não tripulados, legislação brasileira aplicável, métodos de detecção de obstáculos, lógica nebulosa e redes neurais. O controlador desenvolvido foi implementado de forma a imitar as ações realizadas por um operador humano, visando desviar de obstáculos encontrados no caminho de navegação do VANT. Regras de inferência são estabelecidas com base na consultoria de especialistas da área, e os pesos são ajustados pela rede neural. O processo de tomada de decisão ocorre levando em consideração as informações coletadas por um Lidar multicanal e sensores ultrassônicos embarcados no VANT. Por sua vez, o algoritmo desenvolvido foi incorporado em um controlador de voo comercial. O sistema completo do quadricóptero é detalhado, destacando as principais características de todos os sensores e do controlador de voo. Os resultados das simulações computacionais e testes experimentais são apresentados, discutidos e comparados, a fim de avaliar o desempenho do sistema desenvolvido. / [en] This dissertation presents the design and development of an obstacle detection and avoidance system for unmanned aerial vehicles (UAVs), implemented by a neuro-fuzzy controller. In this context, this work presents a theoretical review of unmanned aerial vehicles, the applicable Brazilian legislation, obstacle detection methods, fuzzy logic and neural networks. The developed controller was implemented to mimic the actions taken by a human operator, aiming at avoiding obstacles found in the navigation path of the UAV. Inference rules were established based on consultation with specialists in the field, and the weights were adjusted by the neural network.
The decision-making process takes into account information collected by a multichannel Lidar and ultrasonic sensors embedded in the UAV. In turn, the developed algorithm was embedded in a commercial flight controller. The complete quadcopter system is detailed, highlighting the key features of all sensors and of the flight controller. The results of computational simulations and experimental tests are presented, discussed and compared in order to evaluate the performance of the developed system.
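A toy example of the kind of fuzzy inference such a controller performs, where obstacle distance drives an evasion command; the membership functions and rule outputs below are invented for illustration and are not the dissertation's tuned, neurally adjusted rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def avoidance_command(distance_m):
    """Tiny Sugeno-style rule base: the closer the obstacle, the stronger the
    evasion command (0 = keep course, 1 = full evasive maneuver)."""
    near = tri(distance_m, -1.0, 0.0, 4.0)
    medium = tri(distance_m, 2.0, 5.0, 8.0)
    far = tri(distance_m, 6.0, 10.0, 1e9)      # effectively "10 m and beyond"
    rules = [(near, 1.0), (medium, 0.5), (far, 0.0)]  # firing strength -> crisp output
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(avoidance_command(3.0), 3))  # → 0.714: obstacle at 3 m, strong evasion
```

In a neuro-fuzzy setup, the constants defining the membership functions and rule outputs are exactly the weights the neural network tunes from expert-labeled examples.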
|
49 |
Um sistema de visão para navegação robusta de uma plataforma robótica semi-autônoma
Bezerra, João Paulo de Araújo, 19 May 2006
Large efforts have been made by the scientific community on tasks involving the locomotion of mobile robots. To execute this kind of task, we must give the robot the ability to navigate through the environment in a safe way, that is, without colliding with objects. In order to do this, it is necessary to implement strategies that make it possible to detect obstacles. In this work, we deal with this problem by proposing a system that is able to collect sensory information and to estimate the possibility of obstacles occurring in the mobile robot's path. Stereo cameras positioned in parallel to each other, in a structure coupled to the robot, are employed as the main sensory device, making possible the generation of a disparity map. Code optimizations and a strategy for data reduction and abstraction are applied to the images, resulting in a substantial gain in execution time. This makes it possible for the high-level decision processes to perform obstacle avoidance in real time. This system can be employed in situations where the robot is remotely operated, as well as in situations where it depends only on itself to generate trajectories (the autonomous case). / Grandes esforços têm sido despendidos pela comunidade científica em tarefas de locomoção de robôs móveis. Para a execução deste tipo de tarefa, devemos desenvolver no robô a habilidade de navegação no ambiente de forma segura, isto é, sem que haja colisões contra objetos. Para que isto seja realizado, faz-se necessário implementar estratégias que possibilitem a detecção de obstáculos. Neste trabalho, abordamos este problema, propondo um sistema capaz de coletar informações sensoriais e estimar a possibilidade de ocorrência de obstáculos no percurso de um robô móvel. Câmeras estéreo, posicionadas paralelamente uma à outra, numa estrutura acoplada ao robô, são empregadas como o dispositivo sensorial principal, possibilitando a geração de um mapa de disparidades. Otimizações de código e uma estratégia de redução e abstração de dados são aplicadas às imagens, resultando num ganho substancial no tempo de execução. Isto torna possível aos processos de decisão de mais alto nível executar o desvio de obstáculos em tempo real. Este sistema pode ser empregado em situações onde o robô seja tele-operado, bem como em situações onde ele dependa de si próprio para gerar trajetórias (no caso autônomo).
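The disparity map mentioned in this abstract converts directly to depth through the standard stereo relation Z = f·B/d, which is what lets the system estimate how far away a detected obstacle is. A minimal sketch with illustrative rig parameters (not the thesis's actual camera setup):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    disparity_px: horizontal pixel shift of the same point between the two images.
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centers, in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 12 cm baseline.
print(depth_from_disparity(disparity_px=21, focal_px=700, baseline_m=0.12))  # → 4.0 (m)
```

Since depth varies with 1/d, nearby obstacles produce large disparities and are measured most precisely, which suits obstacle avoidance well.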
|