11

Modelling the Level of Trust in a Cooperative Automated Vehicle Control System

Rosenstatter, Thomas January 2016
Vehicle-to-Vehicle communication is the key technology for increasing the perception of automated vehicles: communication enables virtual sensing through sensors placed in other vehicles, and also allows recognising objects that are out of sight. This thesis presents a Trust System that allows a vehicle to make more reliable and robust decisions. The system evaluates the current situation and generates a Trust Index indicating the level of trust in the environment, the ego vehicle, and the other vehicles. Current research focuses on securing the communication between the vehicles themselves, but does not verify the content of the received data at a system level. The proposed Trust System evaluates the received data according to sensor accuracy, the behaviour of other vehicles, and the perception of the local environment. The results show that the proposed method correctly identifies various situations, and the thesis discusses how the Trust Index can be used to make more robust decisions.
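The abstract does not give the Trust System's exact formulation. As a rough sketch of the idea only, a Trust Index could be a weighted combination of per-source trust scores; the function name, the weights, and the min-aggregation over other vehicles below are all illustrative assumptions, not the author's method:

```python
def trust_index(env_trust, ego_trust, vehicle_trusts, weights=(0.3, 0.3, 0.4)):
    """Combine trust in the environment, the ego vehicle, and other
    vehicles into a single Trust Index in [0, 1].

    vehicle_trusts: per-vehicle trust scores in [0, 1]; the minimum is
    used so that a single untrustworthy sender lowers the overall index.
    """
    w_env, w_ego, w_veh = weights
    other = min(vehicle_trusts) if vehicle_trusts else 1.0
    return w_env * env_trust + w_ego * ego_trust + w_veh * other

# One vehicle sending implausible data drags the index down.
print(trust_index(0.9, 0.95, [0.9, 0.2]))
```

A decision layer could then gate manoeuvres on the index, falling back to conservative behaviour when trust is low.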
12

Situational awareness in autonomous vehicles : learning to read the road

Mathibela, Bonolo January 2014
This thesis is concerned with the problem of situational awareness in autonomous vehicles. In this context, situational awareness refers to the ability of an autonomous vehicle to perceive the road layout ahead, interpret the implied semantics, and gain an awareness of its surroundings, thus reading the road ahead. Autonomous vehicles require a high level of situational awareness in order to operate safely and efficiently in real-world dynamic environments. A system is therefore needed that is able to model the expected road layout in terms of semantics, under both normal and roadwork conditions. This thesis takes a three-pronged approach to this problem: Firstly, we consider reading the road surface. This is formulated in terms of probabilistic road marking classification and interpretation. We then derive the road boundaries using only a 2D laser and algorithms based on geometric priors from Highway Traffic Engineering principles. Secondly, we consider reading the road scene. Here, we formulate a roadwork scene recognition framework based on opponent colour vision in humans. Finally, we provide a data representation for situational awareness that unifies reading the road surface and reading the road scene. This thesis therefore frames situational awareness in autonomous vehicles in terms of both static and dynamic road semantics, and detailed formulations and algorithms are discussed. We test our algorithms on several benchmark datasets collected using our autonomous vehicle on both rural and urban roads. The results illustrate that our road boundary estimation, road marking classification, and roadwork scene recognition frameworks allow autonomous vehicles to truly and meaningfully read the semantics of the road ahead, thus gaining a valuable sense of situational awareness even in challenging layouts, at roadwork sites, and along unknown roadways.
13

Efficient supervision for robot learning via imitation, simulation, and adaptation

Wulfmeier, Markus January 2018
In order to enable more widespread application of robots, we are required to reduce the human effort needed to introduce existing robotic platforms to new environments and tasks. In this thesis, we identify three complementary strategies to address this challenge: imitation learning, domain adaptation, and transfer learning based on simulations. The overall work strives to reduce the effort of generating training data by employing inexpensively obtainable labels and by transferring information between domains with deviating underlying properties. Imitation learning offers untrained personnel a straightforward way to teach robots to perform tasks by providing demonstrations, which represent a comparably inexpensive source of supervision. We develop a scalable approach to identify the preferences underlying demonstration data via the framework of inverse reinforcement learning. The method enables integration of the extracted preferences as cost maps into existing motion planning systems. We further incorporate prior domain knowledge and demonstrate that the approach outperforms baselines, including manually crafted cost functions. In addition to employing low-cost labels from demonstration, we investigate the adaptation of models to domains without available supervisory information. Specifically, the challenge of appearance changes in outdoor robotics, such as illumination and weather shifts, is addressed using an adversarial domain adaptation approach. A principal advantage of the method over prior work is the straightforwardness of adapting arbitrary, state-of-the-art neural network architectures. We demonstrate performance benefits of the method for semantic segmentation of drivable terrain. Our last contribution focuses on simulation-to-real-world transfer learning, where the characteristic differences lie not only in visual appearance but also in the underlying system dynamics.
Our work aims at parallel training in both systems and mutual guidance via auxiliary alignment rewards to accelerate training for real-world systems. The approach is shown to outperform various baselines as well as a unilateral alignment variant.
14

Préparation à la conduite automatisée en Réalité Mixte / Get ready for automated driving with Mixed Reality

Sportillo, Daniele 19 April 2019
Driving automation is an ongoing process that is radically changing how people travel and spend time in their cars during journeys. Conditionally automated vehicles free human drivers from the monitoring and supervision of the system and driving environment, allowing them to perform secondary activities during automated driving, but requiring them to resume the driving task if necessary.
For the drivers, understanding the system's capabilities and limits, recognizing its notifications, and interacting with the vehicle in the appropriate way are crucial to ensuring their own safety and that of other road users. Because of the variety of unfamiliar driving situations that the driver may encounter, traditional handover and training programs may not be sufficient to ensure an effective understanding of the interaction between the human driver and the vehicle during transitions of control. There is thus a need to let drivers experience these situations before their first ride. In this context, Mixed Reality provides potentially valuable learning and skill-assessment tools that allow drivers to familiarize themselves with the automated vehicle and interact with the novel equipment involved in a risk-free environment. While until a few years ago these platforms were restricted to a niche audience, the democratization and large-scale spread of immersive devices has since made their adoption more accessible in terms of cost, ease of implementation, and setup. The objective of this thesis is to investigate the role of Mixed Reality in the acquisition of the competences needed for a driver's interaction with a conditionally automated vehicle. In particular, we explored the role of immersion along the Mixed Reality continuum by investigating different combinations of visualization and manipulation spaces and the correspondence between the virtual and the real world. Owing to industrial constraints, we restricted the possible candidates to light systems that are portable, cost-effective, and accessible; we then analyzed the impact of the sensorimotor incoherences that these systems may cause on the execution of tasks in the virtual environment. Starting from these analyses, we designed a training program aimed at the acquisition of the skills, rules, and knowledge necessary to operate a conditionally automated vehicle.
In addition, we proposed simulated road scenarios of increasing complexity to convey what it feels like to be a driver at this level of automation in different driving situations. Experimental user studies were conducted to determine the impact of immersion on learning and the pertinence of the designed training program and, on a larger scale, to validate the effectiveness of the entire training platform with self-reported and objective measures. Furthermore, the transfer of skills from the training environment to the real situation was assessed with test drives using both high-end driving simulators and actual vehicles on public roads.
15

Sim-to-Real Transfer for Autonomous Navigation

Müller, Matthias 05 1900
This work investigates the problem of transfer from simulation to the real world in the context of autonomous navigation. To this end, we first present a photo-realistic training and evaluation simulator (Sim4CV) which enables several applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator features cars and unmanned aerial vehicles (UAVs) with realistic physics simulation and diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. Using the insights gained from aerial object tracking, we find that current object trackers are either too slow or too inaccurate for online tracking from a UAV. In addition, we find that background clutter, fast motion, and occlusion in particular prevent fast trackers, such as correlation filter (CF) trackers, from performing better. To address this issue we propose a novel and general framework that can be applied to CF trackers in order to incorporate context. As a result, the learned filter is more robust to drift caused by the aforementioned tracking challenges. We show that our framework can improve several CF trackers by a large margin while maintaining a very high frame rate. For the application of autonomous driving, we train a driving policy that drives very well in simulation. However, while our simulator is photo-realistic, a gap between simulation and reality still exists. We show how this gap can be reduced via modularity and abstraction in the driving policy. More specifically, we split the driving task into several modules, namely perception, driving policy, and control. This simplifies the transfer significantly, and we show how a driving policy trained only in simulation can be transferred directly to a robotic vehicle in the physical world.
Lastly, we investigate the application of UAV racing, which has recently emerged as a modern sport. We propose a controller fusion network (CFN) which allows fusing multiple imperfect controllers; the result is a navigation policy that outperforms each one of them. Further, we embed this CFN into a modular network architecture similar to the one for driving, in order to decouple perception and control. We use our photo-realistic simulation environment to demonstrate how navigation policies can be transferred to different environmental conditions through this network modularity.
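The abstract does not detail how the CFN combines its inputs. As a minimal hand-rolled sketch of the idea of fusing imperfect controllers, one could take a softmax-weighted average of their commands, where the per-controller confidence scores stand in for what the real network would learn (function names and values here are illustrative assumptions):

```python
import math

def fuse_controllers(commands, scores):
    """Fuse commands (e.g. steering angles) from several imperfect
    controllers into one.

    commands: one output value per controller.
    scores:   confidence logits, one per controller (learned in the
              real CFN; fixed here for illustration).
    Returns the softmax-weighted average command.
    """
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * c for w, c in zip(weights, commands))

# Two controllers disagree; the more confident one dominates the output.
print(fuse_controllers([0.2, -0.4], [2.0, 0.0]))
```

With equal scores this reduces to a plain average; unequal scores let the fusion lean towards whichever controller the (learned) scoring trusts most in the current situation.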
16

Path Planning and Robust Control of Autonomous Vehicles

Zhu, Sheng January 2020 (has links)
No description available.
17

Sensor Fusion for 3D Object Detection for Autonomous Vehicles

Massoud, Yahya 14 October 2021
Thanks to major advancements in hardware and computational power, sensor technology, and artificial intelligence, the race for fully autonomous driving systems is heating up. With countless challenging conditions and driving scenarios, researchers are tackling the most difficult problems in driverless cars. One of the most critical components is the perception module, which enables an autonomous vehicle to "see" and "understand" its surrounding environment. Given that modern vehicles can have a large number of sensors and available data streams, this thesis presents a deep learning-based framework that leverages multimodal data, i.e. sensor fusion, to perform the task of 3D object detection and localization. We provide an extensive review of the advancements of deep learning-based methods in computer vision, specifically in 2D and 3D object detection tasks. We also study the progress of the literature in both single-sensor and multi-sensor data fusion techniques. Furthermore, we present an in-depth explanation of our proposed approach, which performs sensor fusion using input streams from LiDAR and camera sensors, aiming to simultaneously perform 2D, 3D, and Bird's Eye View detection. Our experiments highlight the importance of learnable data fusion mechanisms and multi-task learning, the impact of different CNN design decisions, speed-accuracy tradeoffs, and ways to deal with overfitting in multi-sensor data fusion frameworks.
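The learnable fusion mechanism itself is not specified in the abstract. One common form such a mechanism can take is an element-wise gate that blends the two modalities per feature; the sketch below assumes fixed gate weights where a real framework would learn them, and all names are hypothetical:

```python
import math

def gated_fusion(lidar_feat, camera_feat, gate_weights):
    """Element-wise gated fusion of equal-length LiDAR and camera
    feature vectors. The gate is a sigmoid of a per-element score
    (fixed here; learned in a real network), so the model can decide,
    feature by feature, which modality to trust.
    """
    fused = []
    for l, c, w in zip(lidar_feat, camera_feat, gate_weights):
        g = 1.0 / (1.0 + math.exp(-w))   # gate value in (0, 1)
        fused.append(g * l + (1.0 - g) * c)
    return fused

# A strongly positive gate weight favours the LiDAR feature,
# a strongly negative one favours the camera feature.
print(gated_fusion([1.0, 0.0], [0.0, 1.0], [4.0, -4.0]))
```

Because the gate is differentiable, it can be trained end-to-end alongside the detection heads, which is one reason learnable fusion tends to beat fixed concatenation or averaging.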
18

Analysis of comparative filter algorithm effect on an IMU

Åkerblom Svensson, Johan, Gullberg Carlsson, Joakim January 2021
An IMU is a sensor with many different use cases; it combines an accelerometer, a gyroscope, and sometimes a magnetometer. One of the biggest problems with IMU sensors is the effect vibrations can have on their data, and the aim of this study is to address that problem by filtering the data. The tests were conducted in cooperation with Husqvarna using two of their automowers, which were run across different surfaces while the IMU data was recorded. To find filters for the IMU data, a comprehensive literature survey was conducted to identify suitable methods for filtering out vibrations; the two filters selected for further testing were the complementary filter and the LMS filter. Once the tests had been run, all the data was added to data sheets where it could be analyzed and the filters applied. In the gathered data, the spikes were clearly visible and more than enough to trigger the mower's emergency stop, which must then be manually reset. The vibrations were too irregular to filter using the LMS filter, since it requires a known reference signal to filter against; hence only the complementary filter was implemented fully. With the complementary filter these vibrations can be minimized and brought well below the level required to trigger an emergency stop. With a high filter weight constant such as 0.98, the margin of error from vibrations can be brought down to between ±1 degree and ±4.6 degrees, depending on the surface and automower under test. The main advantage of the complementary filter is that it only requires one weight constant to adjust the filter intensity, making it easy to use. The one disadvantage is that the higher the weight constant, the more delay there is in the data.
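A complementary filter of the kind described is compact enough to sketch. The constant `alpha` below plays the role of the single weight constant (0.98) mentioned above; the sample data is made up for illustration and is not from the study:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope rates and accelerometer angles into one estimate.

    alpha weights the integrated gyro estimate (smooth, but drifts)
    against the accelerometer angle (absolute, but noisy under
    vibration). Higher alpha suppresses vibration spikes at the cost
    of a slower response.
    """
    angle = accel_angles[0]             # initialise from the accelerometer
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# A single vibration spike in the accelerometer barely moves the estimate:
spiky = [0.0, 30.0, 0.0, 0.0]           # degrees; one large outlier
est = complementary_filter([0.0] * 4, spiky, dt=0.01)
print(est)
```

With `alpha=0.98` only 2% of each accelerometer reading enters the estimate per step, which is exactly the spike suppression (and the delay trade-off) the study reports.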
19

Návrh a realizace elektroniky a software autonomního mobilního robotu / Electronics circuit board and control software design for autonomous mobile robot

Meindl, Jan January 2017
The master's thesis deals with the design and realization of the embedded control system and software of the autonomous mobile robot DACEP. The research section focuses on the selection of sensory equipment. Moreover, the design of the embedded control system and the communication interface between this system and the master PC is described in detail, followed by the design of the localization and navigation software, which uses the ROS framework. The thesis is written to be as instructive as possible for the development of robots of similar construction. Finally, the development of a graphical interface for robot diagnostics and remote control is described.
20

Automated Disconnected Towing System

Yaqin Wang (8797037) 06 May 2020
Towing capacity limits a vehicle's towing ability, and it is usually costly to buy or even rent a vehicle that can tow a certain amount of weight. A widely swaying trailer is one of the main causes of accidents involving towing trailers. This study proposes an affordable automated disconnected towing system (ADTS) that requires no physical connection between the leading vehicle and the trailer vehicle, using only a computer vision system. The ADTS consists of two main parts: a leading vehicle that can perform lane detection, and a trailer vehicle that automatically follows the leading vehicle by detecting its license plate. The trailer vehicle adjusts its speed according to the distance from the leading vehicle.
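The abstract gives no controller details. As an illustrative sketch only, the trailer's speed adjustment could be a clamped proportional law on the gap estimated from the detected license plate; the function name, gains, and limits below are all assumptions, not values from the thesis:

```python
def follow_speed(distance_m, target_gap_m=5.0, cruise_speed=10.0, kp=2.5):
    """Proportional speed command for the trailer vehicle.

    The real ADTS estimates distance from the detected license plate;
    here the distance is given directly. The trailer speeds up when the
    gap exceeds the target and slows, down to a full stop, when too
    close. All gains and limits are illustrative.
    """
    error = distance_m - target_gap_m
    speed = cruise_speed + kp * error
    return max(0.0, min(speed, 2 * cruise_speed))  # clamp to [0, 2x cruise]

print(follow_speed(5.0))   # at the target gap -> 10.0
print(follow_speed(1.0))   # too close -> 0.0
```

A real system would add smoothing of the vision-based distance estimate, since raw plate detections are noisy frame to frame.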
