1 |
Recognition of the immediate driving environment
Wilson, Malcolm Baxter January 2002 (has links)
No description available.
|
2 |
A Mixed-Reality Platform for Robotics and Intelligent Vehicles
Grünwald, Norbert January 2012 (has links)
Mixed Reality is the combination of the real world with a virtual one. In robotics this opens many opportunities to improve existing ways of development and testing. The tools that Mixed Reality gives us can speed up the development process and increase safety during the testing stages. They can make prototyping faster and cheaper, and can boost the development and debugging process thanks to visualization and new opportunities for automated testing. This thesis covers the steps to build a working prototype demonstrator of a Mixed Reality system: from selecting the required components, through integrating them into functional subsystems, to building a fully working demonstration system. The demonstrator uses optical tracking to gather information about the real-world environment and incorporates this data into a virtual representation of the world. This allows the simulation to let virtual and physical objects interact with each other. The results of the simulation are then visualized back into the real world. The presented system has been implemented and successfully tested at Halmstad University.
|
3 |
Uma contribuição ao desenvolvimento de sistemas baseados em visão estéreo para o auxílio a navegação de robôs móveis e veículos inteligentes / A contribution to the development of stereo vision-based systems to aid mobile robot and intelligent vehicle navigation
Fernandes, Leandro Carlos 04 December 2014 (has links)
This thesis contributes to the development of computational systems, based mainly on computer vision, that aid the navigation of mobile robots and intelligent vehicles. First, it proposes a computational architecture for intelligent vehicles that supports building systems for both driver assistance, helping the driver in the driving task, and autonomous control, providing greater safety and autonomy for vehicle traffic in urban areas, on highways, and even in rural settings. This architecture has been refined and validated on the CaRINA I and CaRINA II platforms (Carro Robótico Inteligente para Navegação Autônoma), which were also developed and researched as part of this thesis and allowed practical experimentation with the proposed concepts. In the context of intelligent and autonomous vehicles, sensors for 3D perception of the environment play a very important role, enabling obstacle avoidance and autonomous navigation, and lower-cost sensors have been sought in order to make commercial applications viable. Stereo cameras meet both requirements (cost and 3D perception), and they are the focus of the new automatic calibration method presented in this thesis. The proposed method estimates the extrinsic parameters of a stereo camera system through an evolutionary process that considers only the consistency and quality of selected elements of the scene with respect to the depth map. This is an original form of calibration that allows a user without much knowledge of stereo vision to adjust the camera system to new configurations and needs. 
The proposed system was tested with real images and obtained very promising results compared with traditional stereo camera calibration methods, which rely on an interactive parameter estimation procedure based on presenting a checkerboard pattern. The method is also a promising approach for fusing data from cameras and other sensors, since adjusting the transformation matrices (the system's extrinsic parameters) yields a single reference frame in which the data from the different sensors are represented and grouped.
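The abstract above frames calibration as an evolutionary search over the stereo rig's extrinsic parameters, scored only by depth-map consistency. The thesis's actual operators and fitness function are not reproduced here; as a rough sketch, a minimal (1+1) evolution strategy against a synthetic stand-in score (the "true" parameters and the quadratic score below are invented purely for illustration) could look like:

```python
import random

def evolve_extrinsics(fitness, n_params=6, sigma=0.1, iters=500, seed=0):
    """(1+1) evolution strategy: perturb the current extrinsic estimate
    with Gaussian noise and keep the candidate whenever it scores better."""
    rng = random.Random(seed)
    best = [0.0] * n_params
    best_fit = fitness(best)
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, sigma) for p in best]
        cand_fit = fitness(cand)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best, best_fit

# Synthetic stand-in for the depth-map consistency score: it peaks at a
# made-up "true" set of extrinsics (3 rotations, 3 translations).
true_params = [0.02, -0.01, 0.0, 0.5, 0.0, 0.0]
score = lambda p: -sum((a - b) ** 2 for a, b in zip(p, true_params))

params, fit = evolve_extrinsics(score)  # fit approaches 0 as params approach true_params
```

In the thesis the score would instead be computed from the stereo depth map itself (e.g. the density and coherence of valid disparities), which is what removes the need for a calibration pattern.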
|
4 |
Detecção e rastreamento de obstáculos em ambientes urbanos utilizando visão estéreo / Detection and tracking of obstacles in urban environments using stereo vision
Ridel, Daniela Alves 30 June 2016 (has links)
According to a 2015 report by the World Health Organization (WHO, 2015), 1.3 million people die every year worldwide due to traffic accidents. Intelligent vehicles are a promising way to reduce this drastic number, and many research groups around the world have concentrated their efforts on research that makes this kind of technology viable. Many requirements must be met before a vehicle can drive in a completely autonomous way; localization, mapping, and the recognition of traffic lights and traffic signs are just a few among many. To travel safely, a vehicle needs to know where the agents that share its space are, and once those agents are detected it must predict their movements in order to reduce the risk of collision. This project proposes a system that detects agents (obstacles) and tracks them to estimate their velocities and locations while they remain in the autonomous vehicle's field of view, making it possible to compute the chance of collision between each obstacle and the vehicle. The system uses only the information provided by a stereoscopic camera. Scene points are clustered using 24-neighborhood information, disparity, and a value corresponding to the chance of their belonging to an obstacle. Each cluster is then treated as a candidate obstacle; once its consistency has been checked over two consecutive frames, the cluster is considered an obstacle and is tracked with a Kalman filter (WELCH; BISHOP, 1995), while the Munkres algorithm (MUNKRES, 1957) matches obstacles across the whole sequence. 
Detection and tracking were evaluated quantitatively and qualitatively on data collected at Campus II of USP in São Carlos and on the KITTI dataset (GEIGER; LENZ; URTASUN, 2012). The results demonstrate the efficiency of the algorithm in both obstacle detection and tracking.
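The pipeline in this abstract associates tracked obstacles with new detections using the Munkres (Hungarian) algorithm. As an illustration only (the coordinates are invented, and brute force replaces the polynomial-time Munkres method), the minimum-cost matching between predicted tracks and detections can be sketched as:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exhaustive minimum-cost assignment: the same matching the Munkres
    (Hungarian) algorithm finds in polynomial time, viable here only for
    a handful of tracks. Assumes as many detections as tracks."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = list(perm), c
    return best_perm, best_cost

# Cost = squared distance between predicted track positions (e.g. from the
# Kalman filter) and new detections; all coordinates are made up.
tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(5.2, 4.9), (0.1, -0.1)]
cost = [[(tx - dx) ** 2 + (ty - dy) ** 2 for dx, dy in detections]
        for tx, ty in tracks]
match, total = optimal_assignment(cost)  # match[i]: detection index for track i
```

A production tracker would use an O(n³) Munkres implementation and add gating for unmatched tracks and new detections, which the thesis's two-frame consistency check addresses.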
|
6 |
Traffic Light Status Detection Using Movement Patterns of Vehicles
January 2016 (has links)
abstract: Traditional methods for detecting the status of traffic lights used in autonomous vehicles may be susceptible to errors, which is troublesome in a safety-critical environment. In the case of vision-based recognition methods, failures may arise due to disturbances in the environment such as occluded views or poor lighting conditions. Some methods also depend on high-precision meta-data which is not always available. This thesis proposes a complementary detection approach based on an entirely new source of information: the movement patterns of other nearby vehicles. This approach is robust to traditional sources of error, and may serve as a viable supplemental detection method. Several different classification models are presented for inferring traffic light status based on these patterns. Their performance is evaluated over real-world and simulation data sets, resulting in up to 97% accuracy in each set. / Dissertation/Thesis / Masters Thesis Computer Science 2016
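The thesis evaluates several learned classification models over vehicle movement patterns; those models are not reproduced here. A deliberately simple rule-based stand-in shows the underlying signal, namely that the speeds of nearby vehicles in the approach lane carry information about the light's status (the threshold and majority rule below are invented for illustration):

```python
def infer_light_status(speeds, stop_threshold=1.0):
    """Majority rule over nearby vehicles' speeds (m/s): if most vehicles
    in the approach lane are (nearly) stopped, infer a red light."""
    if not speeds:
        return "unknown"
    stopped = sum(1 for v in speeds if v < stop_threshold)
    return "red" if stopped / len(speeds) > 0.5 else "green"

status_queued = infer_light_status([0.2, 0.0, 0.4])     # most vehicles halted
status_flowing = infer_light_status([12.1, 9.8, 14.3])  # free-flowing traffic
```

The classifiers in the thesis would replace this hand-set rule with models trained on real and simulated movement patterns, which is where the reported accuracy of up to 97% comes from.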
|
8 |
Recognition of driving objects in real time with computer vision and deep neural networks
Dominguez-Sanchez, Alex 19 December 2018
Traffic is one of the key elements that affects our lives on a more or less daily basis. It is present when we go to work, on weekends, on holidays, even when we go shopping in our own neighborhood. Every year, after a bank holiday (a UK public holiday), traffic accidents are a figure that every TV news programme is obliged to report. Looking at the causes, mechanical failure is always a minimal part; human error accounts for the majority. In our society, tasks that technology completes with full success are very frequent: we tune our TVs for digital broadcasting, robots complete thousands of mechanical tasks, we work with computers that do not crash for months or years, and even weather forecasting is more accurate than ever. In traffic and road transport, we are now entering a new era in which driving a vehicle can be assisted partially or totally, parking can be done automatically, and even detecting a child in the middle of the road can be automated rather than left to the error-prone human. The same features that amaze us today (as colour TV broadcasts once did) will in the future be commonplace in our cars. With ever more vehicles on the roads, cars, motorbikes and bicycles, more people in our cities, and the need to be constantly on the move, our society needs a zero-car-accidents mindset, and we now have the technology to achieve it. Computer vision is the computer science field that since the 1980s has aimed at emulating the way humans see and perceive their environment and react to it intelligently. A decade ago, detecting complex objects in a scene the way a human does was impossible: all we could do was detect the edges of an object, threshold pixel values or detect motion, nothing close to the human ability to detect objects and identify their location. Advances in GPU technology and the development of neural networks in the computer vision community have made those impossible tasks possible. With GPUs now a commodity, increasing amounts and speeds of RAM, and new open models developed by neural network experts, detecting a child in the middle of the road is a reality: detections with 99.79% probability are now possible, and the goal of 100% is coming closer. 
In this thesis we address one of the key safety features of traffic analysis systems: monitoring pedestrian crossings. After reviewing the state of the art in pedestrian movement detection, we present a novel strategy for this task. By placing a fixed camera where pedestrians move, we detect their movement and its direction, using a mix of old and new methodologies. A fixed camera allows us to subtract the background of the scene, leaving only the moving pedestrians. From this information we created a dataset of moving people and trained a CNN to detect the direction in which each pedestrian is moving. We also present a traffic dataset and research with state-of-the-art CNN models for detecting objects in traffic environments. Crucial objects such as cars, people, bikes, motorbikes and traffic signals were grouped into a novel dataset used to feed state-of-the-art CNNs, and we analysed their ability to detect and locate those objects from the car's point of view. 
Moreover, with the help of tracking techniques, we improved the efficiency and robustness of the proposed method, creating a system capable of real-time detection of urban objects. We also present a traffic sign dataset comprising 45 different traffic signs, used to train a traffic sign classifier that serves as the second step of our urban object detector. A further basic but important aspect of safe driving is keeping the vehicle within the limits of the road. SLAM techniques have been used for this task in the past, but we present an end-to-end approach in which a CNN model learns to keep the vehicle within the limits of the road by correcting the steering wheel angle. Finally, we show an implementation of the presented systems running on a custom-built electric car. A series of experiments was carried out in a real-life traffic environment to evaluate the steering angle prediction system and the urban object detector, and a mechanical system was implemented on the car to enable automatic steering wheel control.
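The pedestrian-direction pipeline described above relies on a fixed camera so that the static background can be removed before the CNN sees the frames. A toy running-average background subtractor, operating on a 1-D "video" rather than the thesis's full-resolution images, sketches that first stage:

```python
def moving_mask(frames, alpha=0.5, thresh=10):
    """Running-average background subtraction: with a fixed camera the
    static scene can be modelled and removed, leaving moving pedestrians."""
    bg = [float(p) for p in frames[0]]  # initialise background from frame 0
    masks = []
    for frame in frames[1:]:
        masks.append([abs(p - b) > thresh for p, b in zip(frame, bg)])
        bg = [(1 - alpha) * b + alpha * p for b, p in zip(bg, frame)]
    return masks

# Toy 1-D "video": a bright blob moves left to right over a dark static scene.
frames = [
    [0, 0, 0, 0, 0],
    [0, 200, 0, 0, 0],
    [0, 0, 200, 0, 0],
    [0, 0, 0, 200, 0],
]
masks = moving_mask(frames)  # masks[i] flags pixels that differ from the model
```

A real implementation would work on 2-D images (e.g. an OpenCV background subtractor) and feed the resulting foreground regions, tracked over time, to the direction-classifying CNN.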
|
9 |
Vehicle sensor-based pedestrian position identification in V2V environment
Huang, Zhi 03 December 2016
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis presents a method to accurately determine the location and number of pedestrians detected by different vehicles equipped with a Pedestrian Autonomous Emergency Braking (PAEB) system, taking into account the inherent inaccuracy of pedestrian sensing in these vehicles. A mathematical model of the pedestrian information generated by the PAEB system in the V2V network is developed. The Greedy-Medoids clustering algorithm and constrained hierarchical clustering are applied to recognize and reconstruct actual pedestrians, enabling a subject vehicle to approximate the number of pedestrians and their estimated locations from the larger number of pedestrian alert messages received from many nearby vehicles through the V2V network and from the subject vehicle itself. The proposed method estimates the number of actual pedestrians by grouping nearby pedestrian reports broadcast by different vehicles and treating each group as one pedestrian. Computer simulations illustrate the effectiveness and applicability of the proposed method. The result is more integrated and accurate information that allows vehicle Autonomous Emergency Braking (AEB) systems to make better decisions earlier and avoid crashing into pedestrians.
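The thesis applies Greedy-Medoids and constrained hierarchical clustering to fuse pedestrian alerts from many vehicles. As a rough stand-in with invented coordinates and radius, a greedy distance-threshold grouping shows how redundant reports collapse into one pedestrian estimate each:

```python
def merge_reports(points, radius=1.5):
    """Greedy single-linkage grouping: pedestrian reports from different
    vehicles that fall within `radius` metres of an existing group member
    are treated as sightings of the same pedestrian."""
    clusters = []
    for x, y in points:
        for c in clusters:
            if any((x - px) ** 2 + (y - py) ** 2 <= radius ** 2 for px, py in c):
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    # Represent each pedestrian by the centroid of its grouped reports.
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters]

# Five alert messages (metres, invented) that actually describe two pedestrians.
reports = [(10.0, 5.0), (10.4, 5.2), (30.0, 8.0), (10.1, 4.9), (30.3, 7.8)]
pedestrians = merge_reports(reports)
```

The clustering methods named in the abstract additionally account for each report's sensing uncertainty, which this fixed-radius sketch ignores.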
|
10 |
Cooperative Perception in Autonomous Ground Vehicles using a Mobile Robot Testbed
Sridhar, Srivatsan 03 October 2017
For connected and autonomous vehicles, no standard or framework currently exists that defines the right level of information sharing for cooperative autonomous driving. Cooperative perception is proposed among vehicles: every vehicle becomes a moving sensor platform capable of sharing the information collected by its on-board sensors. This extends the line of sight and field of view of autonomous vehicles, which otherwise suffer from blind spots and occlusions. The increased situational awareness promotes safe driving at short range and improves traffic flow efficiency at long range.
This thesis proposes a methodology for short-range cooperative perception between autonomous vehicles. The problem is broken down into the sub-tasks of cooperative relative localization and map merging. Cooperative relative localization is achieved using visual and inertial sensors: a computer-vision-based camera relative pose estimation technique, augmented with position information, provides a pose fix that is subsequently updated by dead reckoning with an inertial sensor. Prior to map merging, a technique for object localization with a monocular camera, based on Inverse Perspective Mapping, is proposed. A mobile multi-robot testbed was developed to emulate autonomous vehicles; the proposed method was implemented on the testbed to detect pedestrians and to respond to the perceived hazard. Potential traffic scenarios in which cooperative perception could prove crucial were tested, and the results are presented in this thesis. / MS / Perception in autonomous vehicles is limited to the field of view of the vehicle's on-board sensors, and the environment may not be fully perceivable due to blind spots and occlusions. To overcome this limitation, vehicle-to-vehicle wireless communication can be leveraged to exchange locally sensed information among vehicles in the vicinity. Vehicles may share information about their own position, heading and velocity, or go one step further and share information about their surroundings as well. The latter form of cooperative perception extends each vehicle's field of view and line of sight and increases situational awareness. The result is increased safety at short range, while communication over a long range could help improve traffic flow efficiency. This thesis proposes one such technique for cooperative perception over a short range. 
The system uses visual and inertial sensors to perform cooperative localization between two vehicles sharing a common field of view, which allows one vehicle to locate the other vehicle in its frame of reference. Subsequently, information about objects in the surroundings of one vehicle, localized using a visual sensor is relayed to the other vehicle through communication. A mobile multi-robot testbed was developed to emulate autonomous vehicles and to experimentally evaluate the proposed method through a series of driving scenario test cases in which cooperative perception could be effective and crucial to the safety and comfort of driving.
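Object localization with a monocular camera via Inverse Perspective Mapping, used above before map merging, reduces on flat ground to intersecting a pixel ray with the road plane. The camera parameters below are invented for illustration and are not the testbed's calibration:

```python
import math

def ground_distance(v, h=1.2, pitch_deg=10.0, f=800.0, v0=240.0):
    """Flat-ground inverse perspective mapping: map the image row v at
    which an object touches the road to a forward distance, given camera
    height h (metres), downward pitch, focal length f and principal row
    v0 (both in pixels). Every number here is illustrative."""
    pitch = math.radians(pitch_deg)
    angle = pitch + math.atan((v - v0) / f)  # ray angle below the horizon
    if angle <= 0:
        return float("inf")  # ray at or above the horizon never hits the road
    return h / math.tan(angle)

d_near = ground_distance(v=400.0)  # row near the image bottom: close object
d_far = ground_distance(v=260.0)   # row just below the principal row: far object
```

The flat-ground assumption is what lets a single camera localize pedestrians; the located objects can then be shared with the other vehicle once the cooperative relative localization step has established a common frame of reference.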
|