41 |
Towards Improved Inertial Navigation By Reducing Errors Using Deep Learning Methodology
Chen, Hua 13 July 2022 (has links)
No description available.
|
42 |
Autonomous Driving with a Simulation Trained Convolutional Neural Network
Franke, Cameron 01 January 2017 (has links) (PDF)
Autonomous vehicles will help society if they can easily support a broad range of driving environments, conditions, and vehicles.
Achieving this requires reducing the complexity of the algorithmic system, easing the collection of training data, and verifying operation using real-world experiments. Our work addresses these issues by utilizing a reflexive neural network that translates images into steering and throttle commands. This network is trained using simulation data from Grand Theft Auto V, which we augment to reduce the number of simulation hours driven. We then validate our work using an RC car system through numerous tests. Our system successfully drives 98 of 100 laps of a track with multiple road types and difficult turns; it also avoids collisions with another vehicle in 90% of trials.
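The reflexive image-to-command mapping this abstract describes can be pictured as a single forward pass through a small convolutional network. The following is a minimal NumPy sketch under assumed, illustrative layer sizes and random weights; it is not the architecture trained in the thesis, only the shape of the idea (camera frame in, steering and throttle out):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kernels, stride=2):
    """Naive valid convolution: img (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    oh = (H - kh) // stride + 1
    ow = (W - kw) // stride + 1
    out = np.zeros((K, oh, ow))
    for k in range(K):
        for i in range(oh):
            for j in range(ow):
                patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
                out[k, i, j] = np.sum(patch * kernels[k])
    return out

rng = np.random.default_rng(0)
image = rng.random((64, 64))                     # stand-in for a camera frame
kernels = rng.standard_normal((4, 5, 5)) * 0.1   # illustrative conv filters
features = relu(conv2d(image, kernels)).ravel()  # flatten feature maps
W = rng.standard_normal((2, features.size)) * 0.01
steering, throttle = np.tanh(W @ features)       # both commands bounded in [-1, 1]
```

The tanh output keeps both commands in a bounded actuator range, which is the usual choice for end-to-end steering regressors.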
|
43 |
Performance enhancement of wide-range perception issues for autonomous vehicles
Sharma, Suvash 13 May 2022 (has links) (PDF)
Because of the mission-critical nature of autonomous driving, the underlying scene-understanding algorithms must be developed with special care, above all with precise attention to accuracy and run-time. If accuracy is compromised, the environment is interpreted incorrectly, which may ultimately lead to accidents. Run-time is equally important: a delayed understanding of the scene hampers the vehicle's real-time response, again risking accidents. Both depend on several factors, such as the design and complexity of the algorithms, the nature of the objects or events encountered in the environment, and weather-induced effects.
In this work, several novel scene-understanding algorithms based on semantic segmentation are devised. First, a transfer learning technique is proposed to transfer knowledge from a data-rich domain to a data-scarce off-road driving domain for semantic segmentation, so that the learned information is transferred efficiently between domains while reducing run-time and increasing accuracy. Second, the performance of several segmentation algorithms is assessed under rainy conditions ranging from mild to severe, and two methods for achieving robustness are proposed. Third, a new method for removing rain from the input images is proposed. Since autonomous vehicles carry many sensors, each capturing a different type of information, it is worth fusing information across all of them. Fourth, a fusion mechanism with a novel algorithm that applies local and non-local attention in a cross-modal setting, combining RGB camera images and lidar-based images for road detection via semantic segmentation, is implemented and validated on different driving scenarios. Fifth, a conceptually new representation of off-road driving trails, called Traversability, is introduced. To establish the correlation between a vehicle's capability and the difficulty of a driving trail, a new dataset called CaT (CAVS Traversability) is introduced; it should prove useful for future research in off-road driving applications, including military and robotic navigation.
|
44 |
Driver behavior impact on pedestrians' crossing experience in the conditionally autonomous driving context
Yang, Su January 2017 (has links)
Autonomous vehicles are developing at a rapid pace, while pedestrians' experience with them is less researched. This paper reports an exploratory study in which 40 participants encountered a conditionally autonomous vehicle exhibiting unusual driver behaviors at a crossing, by watching videos and photos. Questionnaires and semi-structured interviews were used to investigate the pedestrians' experience. The results showed that distracted driver behaviors in the conditionally autonomous driving context had a negative impact on pedestrians' crossing experience, and that blacked-out windows on conditionally autonomous vehicles made pedestrians feel uncomfortable and worried.
|
45 |
How to establish robotaxi trustworthiness through In-Vehicle interaction design.
Hua, Tianxin 22 August 2022 (has links)
No description available.
|
46 |
A requirements engineering approach in the development of an AI-based classification system for road markings in autonomous driving : a case study
Sunkara, Srija January 2023 (has links)
Background: Requirements engineering (RE) is the process of identifying, defining, documenting, and validating requirements. RE approaches are, however, rarely applied to AI-based systems because of those systems' ambiguity, and the topic is still maturing. Research also shows that the quality of ML-based systems suffers from the lack of a structured RE process; hence there is a need to apply RE techniques in the development of ML-based systems. Objectives: This research aims to identify the practices and challenges concerning RE techniques for AI-based systems in autonomous driving, and then to identify a suitable RE approach to overcome the identified challenges. Further, the thesis checks the feasibility of the selected RE approach by developing a prototype AI-based classification system for road markings. Methods: A combination of research methods is used: interviews, a case study, and a rapid literature review. The case company is Scania CV AB. The literature review identifies possible RE approaches that can overcome the challenges identified through interviews and discussions with the stakeholders. A suitable RE approach, GR4ML, is found and used to develop and validate an AI-based classification system for road markings. Results: Results indicate that RE is a challenging subject in autonomous driving. Several challenges are faced at the case company in eliciting, specifying, and validating requirements for AI-based systems, especially in autonomous driving. Results also show that the views in the GR4ML framework were suitable for specifying system requirements and addressed most challenges identified at the case company, and the iterative goal-oriented approach maintained flexibility during development. During the system's development, the Random Forest classifier outperformed the Logistic Regression and Support Vector Machine models for road-marking classification.
Conclusions: The validation of the system suggests that the goal-oriented requirements engineering approach and the GR4ML framework addressed most challenges identified in eliciting, specifying, and validating requirements for AI-based systems at the case company. The views in the GR4ML framework provide a good overview of the functional and non-functional requirements of the lower-level systems in autonomous driving. However, the GR4ML framework might not be suitable for validation of higher-level AI-based systems in autonomous driving due to their complexity.
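The classifier comparison reported above (Random Forest vs. Logistic Regression vs. SVM) follows a standard scikit-learn pattern. A hedged sketch on synthetic data, since the thesis's road-marking features are not public; the feature dimensions and dataset here are purely illustrative stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for road-marking feature vectors (3 marking classes assumed)
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}
# Fit each model and score it on the held-out split
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

On real road-marking data, relative rankings depend on the features; the thesis found the Random Forest strongest, which is plausible for tabular, non-linear feature sets.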
|
47 |
Sequential Semantic Segmentation of Streaming Scenes for Autonomous Driving
Cheng, Guo 12 1900 (links)
Indiana University-Purdue University Indianapolis (IUPUI) / In traffic scene perception for autonomous vehicles, driving videos are available from
in-car sensors such as cameras and LiDAR for road detection and collision avoidance. Several challenges remain in computer vision tasks for video processing, including object detection and tracking, semantic segmentation, etc. First, because consecutive video frames are highly redundant, the traditional spatial-to-temporal approach inherently demands huge computational resources. Second, in many real-time scenarios targets move continuously through the view as data streams in; to achieve a prompt response with minimum latency, an online model that processes the streaming data in shift mode is necessary. Third, in addition to shape-based recognition in the spatial domain, motion detection relies on the inherent temporal continuity of video, yet current works either lack long-term memory for reference or consume a huge amount of computation.
The purpose of this work is to achieve strongly temporally associated sensing results in real time with minimum memory, continually embedded into a pragmatic framework for speed and path planning. It takes a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation, utilizing compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces the video data to a lower dimension and increases the sensing rate. Specifically, we sample a one-pixel line at each video frame; the temporal aggregation of lines from consecutive frames forms a road-profile image, while a motion profile consists of the average lines obtained by sampling a one-belt strip of pixels at each frame. By applying dense temporal resolution to compensate for the sparse spatial resolution, this method reduces 3D streaming data to a 2D image layout. Based on RP and MP under various weather conditions, three main tasks are conducted to contribute to the knowledge domain of perception and planning for autonomous driving.
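The profile construction described here, one sampled pixel line (or averaged belt of lines) per frame, stacked over time, is simple enough to sketch directly. A minimal NumPy version with synthetic frames standing in for a driving video (the sampled row and belt width are assumed values, not the thesis's):

```python
import numpy as np

def road_profile(frames, row):
    """Stack one pixel line (image row `row`) from each frame over time.

    frames: iterable of (H, W) grayscale images.
    Returns a (T, W) temporal-to-spatial road-profile image.
    """
    return np.stack([f[row, :] for f in frames], axis=0)

def motion_profile(frames, row, belt=5):
    """Average a belt of `belt` rows per frame; moving objects leave trajectories."""
    return np.stack([f[row:row + belt, :].mean(axis=0) for f in frames], axis=0)

rng = np.random.default_rng(1)
frames = [rng.random((120, 160)) for _ in range(30)]  # 30 synthetic video frames
rp = road_profile(frames, row=80)                     # (30, 160) profile image
mp = motion_profile(frames, row=78, belt=5)           # (30, 160) motion profile
```

Note how a 30-frame, 120x160 video collapses into two 30x160 images, which is the dimensionality reduction the abstract credits for the increased sensing rate.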
The first application is semantic segmentation of temporal-to-spatial streaming scenes, including recognition of road and roadside, driving events, and objects either static or in motion. Since the main vision sensing tasks for autonomous driving are identifying the road area to follow and locating traffic to avoid collisions, this work tackles the problem with semantic segmentation on road and motion profiles. Though a one-pixel line may not contain sufficient spatial information about road and objects, the consecutive collection of lines as a temporal-spatial image provides the intrinsic spatial layout, thanks to continuous observation and smooth vehicle motion. Moreover, by capturing the trajectories of pedestrians' moving legs in the motion profile, we can robustly distinguish pedestrians in motion from the smooth background. Experimental results on streaming data collected from various sensors, including cameras and LiDAR, demonstrate that an effective recognition of the driving scene can be learned through semantic segmentation in the reduced temporal-to-spatial space.
The second contribution of this work is that it adapts standard semantic segmentation into a sequential semantic segmentation network (SE3), implemented as a new benchmark for image and video segmentation. Most state-of-the-art methods chase accuracy with complex structures at the expense of memory, making trained models heavily dependent on GPUs and thus unsuitable for real-time inference. Without loss of accuracy, this work enables image segmentation with minimal memory. Specifically, instead of predicting per image patch, SE3 generates output along with line scanning. By pinpointing the memory associated with the input line at each neural layer in the network, it preserves the same receptive field as the patch size but saves the computation in the overlapped regions during network shifting. SE3 applies to most current backbone models in image segmentation, and extends inference to video semantic segmentation by fusing temporal information without increasing computational complexity. It thus achieves long-range 3D association at the computational cost of a 2D setting, which will facilitate inference of semantic segmentation on lightweight devices.
The third application is speed and path planning based on the sensing results from naturalistic driving videos. To avoid collisions at close range and navigate the vehicle at middle and far ranges, several RP/MPs are scanned continuously from different depths for vehicle path planning. The semantic segmentation of RP/MP is further extended to multiple depths for path and speed planning according to the sensed headway and lane position. We conduct experiments on profiles of different sensing depths and build a smooth planning framework on top of them. We also build an initial dataset of road and motion profiles with semantic labels from long HD driving videos, published as an additional contribution to future work in computer vision and autonomous driving.
|
48 |
Towards a Robust and Efficient Deep Neural Network for the Lidar Point Cloud Perception
Zhou, Zixiang 01 January 2023 (links) (PDF)
In recent years, LiDAR has emerged as a crucial perception tool for robotics and autonomous vehicles. However, most LiDAR perception methods are adapted from 2D image-based deep learning methods, which are not well-suited to the unique geometric structure of LiDAR point cloud data. This domain gap poses challenges for the fast-growing LiDAR perception tasks. This dissertation aims to investigate suitable deep network structures tailored for LiDAR point cloud data, and therefore design a more efficient and robust LiDAR perception framework. Our approach to address this challenge is twofold. First, we recognize that LiDAR point cloud data is characterized by an imbalanced and sparse distribution in the 3D space, which is not effectively captured by traditional voxel-based convolution methods that treat the 3D map uniformly. To address this issue, we aim to develop a more efficient feature extraction method by either counteracting the imbalanced feature distribution or incorporating global contextual information using a transformer decoder. Second, besides the gap between the 2D and 3D domains, we acknowledge that different LiDAR perception tasks have unique requirements and therefore require separate network designs, resulting in significant network redundancy. To address this, we aim to improve the efficiency of the network design by developing a unified multi-task network that shares the feature-extracting stage and performs different tasks using specific heads. More importantly, we aim to enhance the accuracy of different tasks by leveraging the multi-task learning framework to enable mutual improvements. We propose different models based on these motivations and evaluate them on several large-scale LiDAR point cloud perception datasets, achieving state-of-the-art performance. Lastly, we summarize the key findings of this dissertation and propose future research directions.
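The imbalanced, sparse distribution the dissertation starts from can be made concrete by voxelizing a point cloud and counting points per occupied voxel: most of the 3D grid stays empty while voxels near the sensor are dense, which is exactly what uniform voxel convolution handles poorly. A small sketch on synthetic, range-dependent points (the voxel size and point distribution are illustrative assumptions):

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Map 3D points to integer voxel coordinates; count points per occupied voxel."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, counts = np.unique(coords, axis=0, return_counts=True)
    return uniq, counts

rng = np.random.default_rng(2)
# LiDAR-like range-dependent density: many returns near the sensor, few far away
ranges = rng.exponential(scale=10.0, size=5000)
angles = rng.uniform(0, 2 * np.pi, size=5000)
points = np.stack([ranges * np.cos(angles),
                   ranges * np.sin(angles),
                   rng.uniform(-2, 2, size=5000)], axis=1)
voxels, counts = voxelize(points)  # occupied voxels only; point counts are skewed
```

Sparse-convolution and transformer-decoder approaches like those the dissertation investigates operate only on the occupied `voxels` list rather than the full dense grid.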
|
49 |
Predicting comfort in autonomous driving from vibration measurements using machine learning models
Asarar, Kate January 2021 (links)
Highly automated driving is approaching reality at high speed; BMW is planning to put its first autonomous driving vehicle on the road by 2021. The path to realising this new technology is, however, full of challenges. Not only do the transverse and longitudinal dynamic vehicle motions play an important role in experienced comfort, but so do the occupants' requirements and expectations regarding the vertical dynamic vibration behaviour, especially during long motorway trips where the hitherto active driver becomes a chauffeured passenger who reads, works, or sleeps in his newly gained time. These new use cases create new requirements for the future design of driving comfort which are yet to be fully discovered.
This work was carried out at the BMW headquarters and aimed to use different machine learning models to investigate and identify patterns between the subjective comfort values reported by participants in a study, given on a comfort scale of 1-7, and the mechanical vibrations they experienced, measured in m/s^2. The data was collected in a previous independent study, and statistical methods were used to ensure its quality. A comparison of the ISO 2631-1 comfort ratings with the study's findings is made to understand the need for a more sophisticated model to predict comfort in autonomous driving. The work continues by investigating different dimensionality-reduction methods and their influence on the performance of the models. The process used to build, optimise, and validate neural networks and other models is described in the method chapter, and the results are presented. The work ends with a discussion of both the prediction results and the models' reusability. The machine learning models investigated in this thesis have shown great potential for detecting complex patterns that link feelings and thoughts to mechanical variables.
The models were able to predict the correct level of comfort with up to 50% precision when predicting 6 or 7 levels of comfort. When the task was reduced to high versus low discomfort, i.e. predicting one of two comfort levels, the models achieved a precision of up to 75.4%. Excluded from this thesis is the study of differences between attentive and inattentive states when being driven in an autonomous vehicle: it became clear shortly before the start of this work that the experiment that yielded the data failed to find a statistically significant difference between the two states.
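ISO 2631-1, which the thesis uses as the baseline to beat, rates comfort from the (frequency-weighted) root-mean-square of the measured acceleration. A sketch of that basic RMS quantity on synthetic vibration traces; the frequency weighting and the thesis's actual signals are omitted, so this is only the starting feature the learned models build on:

```python
import numpy as np

def rms(signal):
    """Root-mean-square of an acceleration trace (m/s^2), the core ISO 2631-1 quantity."""
    return float(np.sqrt(np.mean(np.square(signal))))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 5000)                   # 10 s of samples
smooth_ride = 0.1 * np.sin(2 * np.pi * 1.5 * t)    # gentle 1.5 Hz body sway
rough_ride = smooth_ride + 0.5 * rng.standard_normal(t.size)  # added road roughness
```

A pure sinusoid of amplitude A has RMS A / sqrt(2), so the smooth trace lands near 0.071 m/s^2 while the noisy one is far higher; the thesis's point is that a single such number does not capture subjective comfort well, motivating the learned models.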
|
50 |
A study on lane detection methods for autonomous driving
Cudrano, Paolo January 2019 (links)
Machine perception is a key element for the research on autonomous driving vehicles. In particular, we focus on the problem of lane detection with a single camera. Many lane detection systems have been developed and many algorithms have been published over the years. However, while they are already commercially available to deliver lane departure warnings, their reliability is still unsatisfactory for fully autonomous scenarios.
In this work, we questioned the reasons for such limitations. After examining the state of the art and the relevant literature, we identified the key methodologies adopted. We present a self-standing discussion of bird’s eye view (BEV) warping and common image preprocessing techniques, followed by gradient-based and color-based feature extraction and selection. Line fitting algorithms are then described, including least squares methods, Hough transform and random sample consensus (RANSAC). Polynomial and spline models are considered. As a result, a general processing pipeline emerged. We further analyzed each key technique by implementing it and performing experiments using data we previously collected. At the end of our evaluation, we designed and developed an overall system, finally studying its behavior.
This analysis allowed us, on the one hand, to gain insight into the reasons holding back present systems and, on the other, to propose future developments in those directions. / Thesis / Master of Science (MSc)
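The line-fitting stage surveyed in this thesis (RANSAC combined with a polynomial lane model) can be sketched without any vision library: sample minimal point subsets, fit, and keep the hypothesis with the most inliers. A toy version on synthetic lane points with gross outliers; the degree, threshold, and data are illustrative assumptions:

```python
import numpy as np

def ransac_polyfit(x, y, degree=2, n_iter=200, threshold=0.1, seed=0):
    """Robust polynomial fit: repeatedly fit minimal subsets, keep the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(x.size, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(x.size, degree + 1, replace=False)  # minimal sample
        coeffs = np.polyfit(x[idx], y[idx], degree)
        residuals = np.abs(np.polyval(coeffs, x) - y)
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set for the final model
    return np.polyfit(x[best_inliers], y[best_inliers], degree), best_inliers

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
y = 0.5 * x**2 - 0.3 * x + 0.1 + rng.normal(0, 0.01, x.size)  # noisy lane points
y[::10] += rng.uniform(0.5, 1.0, x[::10].size)                # gross outliers
coeffs, inliers = ransac_polyfit(x, y)
```

The final least-squares refit on the consensus set recovers the quadratic despite one in ten points being badly corrupted, which is why RANSAC remains standard ahead of plain least squares in lane pipelines.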
|