  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

A comparison of genetic algorithm and reinforcement learning for autonomous driving / En jämförelse mellan genetisk algoritm och förstärkningslärande för självkörande bilar

Xiang, Ziyi January 2019 (has links)
This paper compares two methods, reinforcement learning and a genetic algorithm, for designing an autonomous car's control system in a dynamic environment. The research problem can be formulated as: how does the learning efficiency of reinforcement learning compare with that of a genetic algorithm for autonomous navigation through a dynamic environment? In conclusion, the genetic algorithm outperforms reinforcement learning on mean learning time, although the former shows a large variance; that is, the genetic algorithm provides better learning efficiency.
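Neither algorithm's implementation is given in the abstract, so as a rough, hypothetical sketch of the kind of genetic algorithm being compared (the real-valued encoding, truncation selection, and Gaussian mutation are all illustrative assumptions, not the thesis's setup), a minimal loop might look like:

```python
import random

def evolve(fitness, pop_size=20, generations=50, mutation_sigma=0.1, seed=0):
    """Minimal real-valued genetic algorithm: truncation selection + Gaussian mutation."""
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank individuals by fitness and keep the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Refill the population with mutated copies of randomly chosen parents.
        children = [rng.choice(parents) + rng.gauss(0.0, mutation_sigma)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy stand-in for a driving controller's score: the best "steering gain" is 0.5.
best = evolve(lambda x: -(x - 0.5) ** 2)
```

Mean learning time would then be measured as evaluations until a target score is reached, averaged over seeds; the large variance noted in the abstract would show up as spread across those seeds.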
102

Handling Occlusion using Trajectory Prediction in Autonomous Vehicles / Ocklusionshantering med hjälp av banprediktion för självkörande fordon

Ljung, Mattias, Nagy, Bence January 2022 (has links)
Occlusion is a frequently occurring challenge in vision systems for autonomous driving. The density of objects in the vehicle's field of view may be so high that some objects are only visible intermittently. It is therefore beneficial to investigate ways to predict the paths of objects under occlusion. In this thesis, we investigate whether trajectory prediction methods can be used to solve the occlusion prediction problem. We investigate two types of approaches, one based on motion models and one based on machine learning models. Furthermore, we investigate whether these two approaches can be fused to produce an even more reliable model. We evaluate our models on a pedestrian trajectory prediction dataset, an autonomous driving dataset, and a subset of the autonomous driving dataset that only includes validation examples of occlusion. The comparison shows that pure motion-model-based methods perform the worst of the three. Machine learning-based models perform better, yet they require additional computing resources for training. Finally, the fused method performs best on both the driving dataset and the occlusion data. Our results also indicate that trajectory prediction methods, both motion-model-based and learning-based, can accurately predict the path of occluded objects up to at least 3 seconds ahead in the autonomous driving scenario.
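The motion-model branch of such a system can be illustrated with the simplest case, constant-velocity extrapolation from the last observed states; this is a generic sketch, not the thesis's actual models, and the 0.1 s sampling interval is an assumption:

```python
def predict_constant_velocity(track, horizon, dt=0.1):
    """Extrapolate the last observed position of an occluded object,
    assuming it keeps its most recently estimated velocity."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    # One predicted (x, y) per future time step.
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# Pedestrian observed at two timestamps, then occluded; predict 3 s ahead (30 steps).
future = predict_constant_velocity([(0.0, 0.0), (0.1, 0.05)], horizon=30)
```

A learned model would replace this extrapolation with a network conditioned on the full track history; fusing the two then means weighting their predictions.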
103

Transformer Based Object Detection and Semantic Segmentation for Autonomous Driving

Hardebro, Mikaela, Jirskog, Elin January 2022 (has links)
The development of autonomous driving systems has been one of the most popular research areas of the 21st century. One key component of such systems is the ability to perceive and comprehend the physical world. Two techniques that address this are object detection and semantic segmentation. During the last decade, CNN-based models have dominated these tasks. In 2021, however, transformer-based networks were able to outperform the existing CNN approaches, indicating a paradigm shift in the domain. This thesis explores the use of a vision transformer, particularly a Swin Transformer, in an object detection and semantic segmentation framework, and compares it to a classical CNN on road scenes. In addition, since real-time execution is crucial for autonomous driving systems, the possibility of a parameter reduction of the transformer-based network is investigated. The results appear to be advantageous for the Swin Transformer compared to the convolution-based network, for both object detection and semantic segmentation. Furthermore, the analysis indicates that it is possible to reduce the computational complexity while retaining performance.
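The Swin Transformer's efficiency comes from restricting self-attention to small non-overlapping windows rather than the whole feature map. As an illustrative sketch of just that partition step (the window size of 2 and the plain-list feature map are toy assumptions, not the thesis's configuration):

```python
def partition_windows(feature_map, window):
    """Split an H x W feature map into non-overlapping window x window tiles,
    the grouping on which windowed self-attention would operate."""
    h, w = len(feature_map), len(feature_map[0])
    assert h % window == 0 and w % window == 0
    tiles = []
    for i in range(0, h, window):
        for j in range(0, w, window):
            tiles.append([row[j:j + window] for row in feature_map[i:i + window]])
    return tiles

fmap = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy feature map
tiles = partition_windows(fmap, window=2)
```

Attention cost then scales with the window size squared instead of the full image size squared, which is what makes parameter and compute reduction plausible.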
104

Model Based Systems Engineering Approach to Autonomous Driving : Application of SysML for trajectory planning of autonomous vehicle

Veeramani Lekamani, Sarangi January 2018 (has links)
The Model Based Systems Engineering (MBSE) approach aims to implement the various processes of Systems Engineering (SE) through diagrams that provide different perspectives of the same underlying system. This approach provides a basis that helps develop a complex system in a systematic manner. This thesis therefore derives a system model through this approach for the purpose of autonomous driving, specifically focusing on developing the subsystem responsible for generating a feasible trajectory for a miniature vehicle, called AutoCar, to enable it to move towards a goal. The report provides a background on MBSE and the Systems Modeling Language (SysML), which is used for modelling the system. With this background, an MBSE framework for AutoCar is derived and the overall system design is explained. The report further explains the concepts involved in autonomous trajectory planning, followed by an introduction to the Robot Operating System (ROS) and its application to trajectory planning for the system. The report concludes with a detailed analysis of the benefits of using this approach for developing a system. It also identifies the shortcomings of applying MBSE to system development. The report closes with a note on how the project can be carried forward and realized on a physical system.
105

Dynamic Object Removal for Point Cloud Map Creation in Autonomous Driving : Enhancing Map Accuracy via Two-Stage Offline Model / Dynamisk objekt borttagning för skapande av kartor över punktmoln vid autonom körning : Förbättrad kartnoggrannhet via tvåstegs offline-modell

Zhou, Weikai January 2023 (has links)
Autonomous driving is an emerging area that has been receiving an increasing amount of interest from companies and researchers. The 3D point cloud map is a significant foundation of autonomous driving, as it provides essential information for localization and environment perception. However, when gathering road information for map creation, the presence of dynamic objects such as vehicles, pedestrians, and cyclists adds noise and unnecessary information to the final map. To solve this problem, this thesis presents a novel two-stage model comprising a scan-to-scan removal stage and a scan-to-map generation stage. By designing a new three-branch neural network and a new attention-based fusion block, the scan-to-scan part achieves a higher mean Intersection-over-Union (mIoU) score. By improving the ground plane estimation, the scan-to-map part preserves more static points while removing a large number of dynamic points. Tests on the SemanticKITTI dataset and a Scania dataset show that our two-stage model outperforms other baselines.
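The mIoU metric used to score the scan-to-scan stage is standard; a minimal sketch of how it is computed over per-point class labels (the two-class static/dynamic labeling here is an assumption for illustration):

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union, averaged over classes that occur
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Four points, two classes: 0 = static, 1 = dynamic.
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```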
106

Guardrail detection for landmark-based localization

Gumaelius, Nils January 2022 (has links)
A requirement for safe autonomous driving is an accurate global localization of the ego vehicle. Methods based on the Global Navigation Satellite System (GNSS) are the most common but are not precise enough in areas without good satellite signals. Instead, methods like landmark-based localization (LBL) can be used. In LBL, sensors onboard the vehicle detect landmarks near the vehicle. With these detections, the vehicle's position is deduced by looking up matching landmarks on a high-definition map. Commonly found along roads and stretching for long distances, guardrails are a great landmark for LBL. In this thesis, two methods are proposed to detect and vectorize guardrails from vehicle sensor data to enable future map matching for LBL. The first method uses semantically labeled LiDAR data, with pre-classified guardrail LiDAR points as input. It is based on the DBSCAN clustering algorithm, which clusters the pre-classified LiDAR points and filters out false positives. The second method uses raw LiDAR data as input. It finds guardrail candidate points by segmenting high-density areas and matching these against thresholds derived from the geometry of guardrails. As in the first method, these are then clustered into guardrail clusters. The clusters are then vectorized into the desired output, a 2D vector of points along the guardrail at a specific interval. To evaluate the performance of the proposed algorithms, simulations on real-life data are analyzed both quantitatively and qualitatively. The qualitative experiments show that both methods perform well even in difficult scenarios. Timings of the simulations show that both methods are fast enough for real-time use cases. The defined performance measures show that the method using raw LiDAR data is more robust and manages to detect more, and longer, parts of the guardrails.
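DBSCAN, the clustering algorithm behind the first method, groups points that are density-reachable and marks isolated points as noise, which is exactly what filters out stray false detections between guardrail segments. A minimal pure-Python sketch (the `eps` and `min_pts` values and the toy 2D points are assumptions, not the thesis's tuning):

```python
def dbscan(points, eps, min_pts):
    """Label 2-D points with cluster ids 0, 1, ... or -1 for noise (plain DBSCAN)."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster           # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:       # core point: keep expanding
                queue.extend(j_seeds)
        cluster += 1
    return labels

# Two guardrail-like runs of points plus one stray false positive.
pts = [(0, 0), (0.5, 0), (1.0, 0), (10, 0), (10.5, 0), (11.0, 0), (50, 50)]
labels = dbscan(pts, eps=0.6, min_pts=2)
```

Each resulting cluster would then be vectorized into evenly spaced points along its length.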
107

Occlusion-Aware Autonomous Highway Driving : Tracking safe velocity bounds on potential hidden traffic for improved trajectory planning / Skymd-sikt-medveten autonom motorvägskörning : Bestämning av säkra hastighetsgränser för möjlig skymd trafik för förbättrad banplanering

van Haastregt, Jonne January 2023 (has links)
In order to reach higher levels of autonomy in autonomous driving, it is important to consider potentially occluded traffic participants. Current research has considered occlusion-aware autonomous driving in urban situations. However, no implementations have yet shown good performance in high-velocity situations such as highway driving, since current methods are too conservative in these situations and result in frequent excessive braking. In this work, a method is proposed that tracks bounds on the velocity states of potential hidden traffic using reachability analysis. It is proven that the method can guarantee collision-free trajectories for any, potentially hidden, traffic. The method is evaluated on cut-in scenarios retrieved from a dataset of recorded traffic. The results show that tracking the velocity bounds for potentially hidden traffic yields more efficient trajectories, up to 18 km/h faster than existing occlusion-aware methods. While the method shows clear improvements, it does not always manage to establish a velocity bound, and at times excessive braking still occurs. Further work is thus necessary to ensure consistently well-performing occlusion-aware highway driving.
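One way to picture velocity-bound tracking is as interval propagation: at each time step, a hidden vehicle's possible speed interval grows by worst-case acceleration limits, is clipped to the road's legal range, and can be tightened when sensing rules out fast hidden traffic. This is a hypothetical one-step sketch under assumed limits, not the thesis's reachability analysis:

```python
def propagate_velocity_bounds(v_min, v_max, dt, a_min, a_max, v_limit,
                              observed_clear=None):
    """One reachability-style step for a potentially hidden vehicle's
    velocity interval [v_min, v_max] (all quantities in m/s, s, m/s^2)."""
    v_min = max(0.0, v_min + a_min * dt)      # hardest braking, no reversing
    v_max = min(v_limit, v_max + a_max * dt)  # hardest acceleration, speed limit
    if observed_clear is not None:
        # An observation that rules out fast hidden traffic tightens the bound.
        v_max = min(v_max, observed_clear)
    return v_min, v_max

# Assumed limits: braking -8 m/s^2, acceleration 3 m/s^2, limit 33.3 m/s (120 km/h).
lo, hi = propagate_velocity_bounds(10.0, 20.0, dt=0.1, a_min=-8.0, a_max=3.0,
                                   v_limit=33.3)
```

The ego planner would then treat any hidden vehicle as moving at `hi` when checking for collisions, which is less conservative than assuming the speed limit outright.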
108

Sequential Semantic Segmentation of Streaming Scenes for Autonomous Driving

Guo Cheng (13892388) 03 February 2023 (has links)
In traffic scene perception for autonomous vehicles, driving videos are available from in-car sensors such as camera and LiDAR for road detection and collision avoidance. Several challenges exist in computer vision tasks for video processing, including object detection and tracking, semantic segmentation, etc. First, because consecutive video frames are highly redundant, the traditional spatial-to-temporal approach inherently demands huge computational resources. Second, in many real-time scenarios, targets move continuously in the view as data streams in. To achieve prompt response with minimum latency, an online model that processes the streaming data in shift mode is necessary. Third, in addition to shape-based recognition in spatial space, motion detection also relies on the inherent temporal continuity of videos, while current works either lack long-term memory for reference or consume a huge amount of computation.

The purpose of this work is to achieve strongly temporal-associated sensing results in real time with minimum memory, continually embedded in a pragmatic framework for speed and path planning. It takes a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation. It utilizes compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces video data to a lower dimension and increases the sensing rate. Specifically, we sample a one-pixel line at each video frame; the temporal congregation of lines from consecutive frames forms a road profile image, while the motion profile consists of the average lines obtained by sampling a one-belt pixel region at each frame. By applying the dense temporal resolution to compensate for the sparse spatial resolution, this method reduces 3D streaming data to a 2D image layout. Based on RP and MP under various weather conditions, three main tasks are conducted to contribute to the knowledge domain of perception and planning for autonomous driving.

The first application is semantic segmentation of temporal-to-spatial streaming scenes, including recognition of road and roadside, driving events, and objects that are static or in motion. Since the main vision sensing tasks for autonomous driving are identifying the road area to follow and locating traffic to avoid collision, this work tackles the problem by applying semantic segmentation to road and motion profiles. Though a one-pixel line may not contain sufficient spatial information about road and objects, the consecutive collection of lines as a temporal-spatial image provides an intrinsic spatial layout because of the continuous observation and smooth vehicle motion. Moreover, by capturing the trajectory of pedestrians' moving legs in the motion profile, we can robustly distinguish pedestrians in motion against a smooth background. Experimental results on streaming data collected from various sensors, including camera and LiDAR, demonstrate that, in the reduced temporal-to-spatial space, effective recognition of the driving scene can be learned through semantic segmentation.

The second contribution of this work is that it adapts standard semantic segmentation into a sequential semantic segmentation network (SE3), which is implemented as a new benchmark for image and video segmentation. Most state-of-the-art methods pursue accuracy by designing complex structures at the expense of memory use, which makes trained models heavily dependent on GPUs and thus inapplicable to real-time inference. Without loss of accuracy, this work enables image segmentation with minimal memory. Specifically, instead of predicting an image patch, SE3 generates output along with line scanning. By pinpointing the memory associated with the input line at each neural layer in the network, it preserves the same receptive field as the patch size but saves the computation in the overlapped regions during network shifting. SE3 applies to most current backbone models in image segmentation, and it extends inference by fusing temporal information without increasing computational complexity for video semantic segmentation. Thus, it achieves 3D association over long ranges under the computation of a 2D setting. This will facilitate inference of semantic segmentation on lightweight devices.

The third application is speed and path planning based on the sensing results from naturalistic driving videos. To avoid collision at close range and navigate the vehicle at middle and far ranges, several RP/MPs are scanned continuously at different depths for vehicle path planning. The semantic segmentation of RP/MP is further extended to multiple depths for path and speed planning according to the sensed headway and lane position. We conduct experiments on profiles of different sensing depths and build a smooth planning framework according to them. We also build an initial dataset of road and motion profiles with semantic labels from long HD driving videos. The dataset is published as an additional contribution to future work in computer vision and autonomous driving.
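The core data reduction described above, sampling one line per frame and stacking the lines over time, can be sketched in a few lines; the frame layout and sampled row here are toy assumptions, not the dissertation's sensor geometry:

```python
def build_road_profile(frames, sample_row):
    """Stack one pixel row per video frame: row t of the output is the sampled
    line from frame t, turning a 3-D video volume into a 2-D temporal image."""
    return [frame[sample_row] for frame in frames]

# Three 4x4 grayscale frames; sample the row where the road typically appears.
video = [[[t * 10 + r for _ in range(4)] for r in range(4)] for t in range(3)]
profile = build_road_profile(video, sample_row=2)
```

A motion profile would additionally average a small belt of rows per frame rather than taking a single row.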
109

Urban Virtual Test Field for Highly Automated Vehicle Systems

Degen, René January 2021 (has links)
Autonomous driving is one of the key technologies for increasing road safety and reducing traffic volumes. Therefore, science and industry are working together on new innovative solutions in this field of technology. One important component in this context is the approval and testing of new solution concepts, with special focus on those for urban environments. Not only because of the high diversity of traffic situations, but also because of the close contact between vulnerable road users (VRUs) and automated vehicles. In the course of this work, a novel approach for testing automated driving functions and vehicle systems in urban environments is presented. The goal is to create a safe and valid environment in which the automated vehicle and the VRU can meet and interact. The basis is a highly realistic virtual model of a city center. The physical behavior of the vehicle and VRU is recorded using measurement technology and transferred to the virtual city model. Based on representative urban traffic scenarios, the functionality of the urban test field is investigated from various points of view. Thereby, the focus is on real-time capability and the quality of interaction between the vehicle and the VRU. The investigations show that both the real-time capability and the interaction possibilities could be demonstrated. Further, the developed methodologies are suitable for real-time applications. / CityInMotion
110

Cognitively Guided Modeling of Visual Perception in Intelligent Vehicles

Plebe, Alice 20 April 2021 (has links)
This work proposes a strategy for visual perception in the context of autonomous driving. Despite the growing research aiming to implement self-driving cars, no artificial system can yet claim to have reached the driving performance of a human. Humans---when not distracted or drunk---are still the best drivers you can currently find. Hence, theories about the human mind and its neural organization could reveal precious insights into how to design a better autonomous driving agent. This dissertation focuses specifically on the perceptual aspect of driving, and it takes inspiration from four key theories on how the human brain achieves the cognitive capabilities required by the activity of driving. The first idea lies at the foundation of current cognitive science, and it argues that thinking nearly always involves some sort of mental simulation, which takes the form of imagery when dealing with visual perception. The second theory explains how the perceptual simulation takes place in neural circuits called convergence-divergence zones, which expand and compress information to extract abstract concepts from visual experience and code them into compact representations. The third theory highlights that perception---when specialized for a complex task such as driving---is refined by experience in a process called perceptual learning. The fourth theory, namely the free-energy principle of predictive brains, corroborates the role of visual imagination as a fundamental mechanism of inference. In order to implement these theoretical principles, it is necessary to identify the most appropriate computational tools currently available. Within the consolidated and successful field of deep learning, I select the artificial architectures and strategies that bear a sound resemblance to their cognitive counterparts.
Specifically, convolutional autoencoders have a strong correspondence with the architecture of convergence-divergence zones and the process of perceptual abstraction. The free-energy principle of predictive brains is related to variational Bayesian inference and the use of recurrent neural networks. In fact, this principle can be translated into a training procedure that learns abstract representations predisposed to predicting how the current road scenario will change in the future. The main contribution of this dissertation is a method to learn conceptual representations of the driving scenario from visual information. This approach forces a semantic internal organization, in the sense that distinct parts of the representation are explicitly associated with specific concepts useful in the context of driving. Specifically, the model uses as few as 16 neurons for each of the two basic concepts considered here: vehicles and lanes. At the same time, the approach biases the internal representations towards the ability to predict the dynamics of objects in the scene. This property of temporal coherence allows the representations to be exploited to predict plausible future scenarios and to perform a simplified form of mental imagery. In addition, this work includes a proposal to tackle the problem of opaqueness affecting deep neural networks. I present a method that aims to mitigate this issue in the context of longitudinal control for automated vehicles. A further contribution of this dissertation experiments with higher-level spaces of prediction, such as occupancy grids, which could reconcile direct application to motor control with biological plausibility.
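The semantic organization described above, with 16 latent units per concept, amounts to partitioning the latent vector into named slots. A hypothetical sketch of just that bookkeeping (the dissertation's actual model is a trained convolutional autoencoder, which is not reproduced here):

```python
def split_concept_slots(latent, slot_size=16, concepts=("vehicles", "lanes")):
    """Map a flat latent vector onto named concept slots: the first 16 units
    encode vehicles, the next 16 encode lanes, mirroring the per-concept layout."""
    assert len(latent) == slot_size * len(concepts)
    return {name: latent[i * slot_size:(i + 1) * slot_size]
            for i, name in enumerate(concepts)}

# A 32-dimensional latent vector split into its two concept slots.
slots = split_concept_slots(list(range(32)))
```

During training, each slot would be supervised so that it alone suffices to reconstruct the corresponding concept mask.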
