  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Driver behavior impact on pedestrians' crossing experience in the conditionally autonomous driving context / Förarbeteendets påverkan på fotgängares upplevelse vid övergångställen i det villkorligt autonoma körförhållande

Yang, Su January 2017 (has links)
Autonomous vehicles are developing at a rapid pace, while pedestrians' experience with autonomous vehicles is less researched. This paper reports an exploratory study in which 40 participants encountered a conditionally autonomous vehicle exhibiting unusual driver behaviors at a crossing, presented through videos and photos. Questionnaires and semi-structured interviews were used to investigate the pedestrians' experience. The results showed that distracted driver behaviors in the conditionally autonomous driving context had a negative impact on pedestrians' crossing experience, and that blacked-out windows on conditionally autonomous vehicles made pedestrians feel uncomfortable and worried.
42

How to establish robotaxi trustworthiness through In-Vehicle interaction design.

Hua, Tianxin 22 August 2022 (has links)
No description available.
43

A requirements engineering approach in the development of an AI-based classification system for road markings in autonomous driving : a case study

Sunkara, Srija January 2023 (has links)
Background: Requirements engineering (RE) is the process of identifying, defining, documenting, and validating requirements. However, RE approaches are usually not applied to AI-based systems because of the ambiguity of such systems, and RE for AI remains a growing subject. Research also shows that the quality of ML-based systems suffers from the lack of a structured RE process. Hence, there is a need to apply RE techniques in the development of ML-based systems.  Objectives: This research aims to identify the practices and challenges concerning RE techniques for AI-based systems in autonomous driving, and then to identify a suitable RE approach to overcome the identified challenges. Further, the thesis aims to assess the feasibility of the selected RE approach by developing a prototype AI-based classification system for road markings.  Methods: A combination of research methods is used: interviews, a case study, and a rapid literature review. The case company is Scania CV AB. The literature review identifies possible RE approaches that can overcome the challenges surfaced through interviews and discussions with the stakeholders. A suitable RE approach, GR4ML, is selected and used to develop and validate an AI-based classification system for road markings.  Results: The results indicate that RE is a challenging subject in autonomous driving. Several challenges are faced at the case company in eliciting, specifying, and validating requirements for AI-based systems, especially in autonomous driving. The results also show that the views in the GR4ML framework were suitable for the specification of system requirements and addressed most challenges identified at the case company, and the iterative goal-oriented approach maintained flexibility during development. During the system's development, the Random Forest classifier outperformed the Logistic Regression and Support Vector Machine models for road-marking classification.  
Conclusions: The validation of the system suggests that the goal-oriented requirements engineering approach and the GR4ML framework addressed most challenges identified in eliciting, specifying, and validating requirements for AI-based systems at the case company. The views in the GR4ML framework provide a good overview of the functional and non-functional requirements of the lower-level systems in autonomous driving. However, the GR4ML framework might not be suitable for validation of higher-level AI-based systems in autonomous driving due to their complexity.
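The classifier comparison reported in this thesis can be framed as a small model-selection harness that scores every candidate on the same labelled data. The sketch below is illustrative only: plain Python, with toy data and two toy models standing in for the thesis's Random Forest, Logistic Regression and SVM; every name in it is an assumption, not the thesis's code.

```python
# Hypothetical model-selection harness: score each candidate classifier on
# shared labelled data and pick the best, mirroring the thesis's comparison.

def accuracy(model, samples):
    """Fraction of samples the model labels correctly."""
    correct = sum(1 for features, label in samples if model(features) == label)
    return correct / len(samples)

# Toy labelled data: the single feature is a stand-in for e.g. marking width;
# the classes are "dashed" vs "solid" road markings.
data = [((0.1,), "dashed"), ((0.2,), "dashed"),
        ((0.8,), "solid"), ((0.9,), "solid")]

def majority_baseline(features):
    return "dashed"  # always predicts one class, as a floor to beat

def width_threshold(features):
    return "solid" if features[0] > 0.5 else "dashed"

candidates = {"baseline": majority_baseline, "threshold": width_threshold}
scores = {name: accuracy(model, data) for name, model in candidates.items()}
best = max(scores, key=scores.get)  # -> "threshold"
```

In the thesis's setting, the real models would be fitted estimators and the scoring would use held-out data, but the selection step is the same comparison over a common metric.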
44

Sequential Semantic Segmentation of Streaming Scenes for Autonomous Driving

Cheng, Guo 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In traffic scene perception for autonomous vehicles, driving videos are available from in-car sensors such as camera and LiDAR for road detection and collision avoidance. Several challenges remain in computer vision tasks for video processing, including object detection and tracking, semantic segmentation, etc. First, because consecutive video frames carry large data redundancy, the traditional spatial-to-temporal approach inherently demands huge computational resources. Second, in many real-time scenarios, targets move continuously through the view as data stream in; to achieve a prompt response with minimum latency, an online model that processes the streaming data in shift mode is necessary. Third, in addition to shape-based recognition in the spatial domain, motion detection also relies on the inherent temporal continuity of video, yet current works either lack long-term memory for reference or consume a huge amount of computation. The purpose of this work is to achieve strongly temporally associated sensing results in real time with minimum memory, continually embedded in a pragmatic framework for speed and path planning. It takes a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation, utilizing compact road profiles (RP) and motion profiles (MP) to identify path regions and dynamic objects, which drastically reduces video data to a lower dimension and increases the sensing rate. Specifically, we sample a one-pixel line at each video frame; the temporal congregation of lines from consecutive frames forms a road-profile image, while the motion profile consists of the average lines obtained by sampling a one-belt band of pixels at each frame. By applying dense temporal resolution to compensate for sparse spatial resolution, this method reduces 3D streaming data to a 2D image layout. 
Based on RP and MP under various weather conditions, three main tasks are conducted to contribute to the knowledge domain of perception and planning for autonomous driving. The first application is semantic segmentation of temporal-to-spatial streaming scenes, including recognition of road and roadside, driving events, and objects in stasis or in motion. Since the main vision sensing tasks for autonomous driving are identifying the road area to follow and locating traffic to avoid collision, this work tackles the problem by applying semantic segmentation to road and motion profiles. Though a one-pixel line may not contain sufficient spatial information about road and objects, the consecutive collection of lines as a temporal-spatial image provides an intrinsic spatial layout because of the continuous observation and smooth vehicle motion. Moreover, by capturing the trajectory of pedestrians' moving legs in the motion profile, we can robustly distinguish pedestrians in motion against a smooth background. Experimental results on streaming data collected from various sensors, including camera and LiDAR, demonstrate that an effective recognition of the driving scene can be learned through semantic segmentation in the reduced temporal-to-spatial space. The second contribution of this work is the adaptation of standard semantic segmentation into a sequential semantic segmentation network (SE3), implemented as a new benchmark for image and video segmentation. Most state-of-the-art methods pursue accuracy through complex structures at the expense of memory use, which makes trained models heavily dependent on GPUs and thus inapplicable to real-time inference. Without accuracy loss, this work enables image segmentation with minimum memory: instead of predicting per image patch, SE3 generates output along with line scanning. By pinpointing the memory associated with the input line at each neural layer, the network preserves the same receptive field as the patch size but saves the computation in the overlapped regions during network shifting. SE3 applies to most current backbone models in image segmentation, and it furthers inference by fusing temporal information without increasing computational complexity for video semantic segmentation; it thus achieves long-range 3D association at the computational cost of a 2D setting, which facilitates inference of semantic segmentation on lightweight devices. The third application is speed and path planning based on the sensing results from naturalistic driving videos. To avoid collision at close range and navigate the vehicle at middle and far ranges, several RP/MPs are scanned continuously from different depths for vehicle path planning. The semantic segmentation of RP/MP is further extended to multiple depths for path and speed planning according to the sensed headway and lane position. We conduct experiments on profiles of different sensing depths and build a smooth planning framework based on them. We also build an initial dataset of road and motion profiles with semantic labels from long HD driving videos; the dataset is published as an additional contribution to future work in computer vision and autonomous driving.
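The core data reduction described above — sampling one pixel line per frame and stacking the lines over time — can be sketched in a few lines. This is a minimal illustration of the road-profile idea, not the dissertation's implementation; the frame contents and sampling row are invented for the example.

```python
# Hedged sketch: reduce a 3D video stream (time x height x width) to a 2D
# temporal-spatial "road profile" by taking the same scanline from each frame.

def road_profile(frames, row):
    """Collect one scanline per frame; row i of the result is time step i."""
    return [frame[row] for frame in frames]

# Toy 3-frame "video": each frame is 4 rows x 5 columns of intensities,
# where pixel value = 10 * frame_index + row_index.
frames = [[[10 * t + r] * 5 for r in range(4)] for t in range(3)]

profile = road_profile(frames, row=2)   # same scanline from every frame
```

Because the vehicle moves smoothly, consecutive scanlines vary gradually, so the stacked 2D profile retains an intrinsic spatial layout while the data volume drops by a factor of the frame height.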
45

Towards a Robust and Efficient Deep Neural Network for the Lidar Point Cloud Perception

Zhou, Zixiang 01 January 2023 (has links) (PDF)
In recent years, LiDAR has emerged as a crucial perception tool for robotics and autonomous vehicles. However, most LiDAR perception methods are adapted from 2D image-based deep learning methods, which are not well-suited to the unique geometric structure of LiDAR point cloud data. This domain gap poses challenges for the fast-growing LiDAR perception tasks. This dissertation aims to investigate suitable deep network structures tailored for LiDAR point cloud data, and therefore design a more efficient and robust LiDAR perception framework. Our approach to address this challenge is twofold. First, we recognize that LiDAR point cloud data is characterized by an imbalanced and sparse distribution in the 3D space, which is not effectively captured by traditional voxel-based convolution methods that treat the 3D map uniformly. To address this issue, we aim to develop a more efficient feature extraction method by either counteracting the imbalanced feature distribution or incorporating global contextual information using a transformer decoder. Second, besides the gap between the 2D and 3D domains, we acknowledge that different LiDAR perception tasks have unique requirements and therefore require separate network designs, resulting in significant network redundancy. To address this, we aim to improve the efficiency of the network design by developing a unified multi-task network that shares the feature-extracting stage and performs different tasks using specific heads. More importantly, we aim to enhance the accuracy of different tasks by leveraging the multi-task learning framework to enable mutual improvements. We propose different models based on these motivations and evaluate them on several large-scale LiDAR point cloud perception datasets, achieving state-of-the-art performance. Lastly, we summarize the key findings of this dissertation and propose future research directions.
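The unified multi-task design described above — one shared feature-extraction stage feeding several task-specific heads — can be sketched structurally. The following is an illustrative skeleton, not the dissertation's network: the "backbone" and "heads" here are trivial stand-in functions, and all names are assumptions.

```python
# Illustrative multi-task skeleton: the backbone runs once per input and its
# features are shared by every task head, avoiding per-task redundancy.

def backbone(points):
    # Stand-in feature extractor: summarise a 1-D "point cloud" into features.
    return {"mean": sum(points) / len(points), "count": len(points)}

def detection_head(features):
    return f"{features['count']} points detected"

def segmentation_head(features):
    return "road" if features["mean"] < 0.5 else "object"

def multi_task_forward(points, heads):
    features = backbone(points)            # computed once, shared by all heads
    return {name: head(features) for name, head in heads.items()}

out = multi_task_forward([0.2, 0.3, 0.4],
                         {"det": detection_head, "seg": segmentation_head})
```

In a real network the backbone would be a learned encoder over voxels or points and each head a learned decoder, but the sharing pattern, and the opportunity for tasks to improve each other through a joint loss, is the same.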
46

Predicting comfort in autonomous driving from vibration measurements using machine learning models / Komfort förutsägelse i självkörande bilar med avnändning av maskininlärning metoder

Asarar, Kate January 2021 (has links)
Highly automated driving is approaching reality at high speed. BMW is planning to put its first autonomous driving vehicle on the road by 2021. The path to realising this new technology is, however, full of challenges. Not only do the transverse and longitudinal dynamic vehicle motions play an important role in experienced comfort, but so do the occupants' requirements and expectations regarding the vertical dynamic vibration behaviour — especially during long motorway trips, where the hitherto active driver becomes a chauffeured passenger who reads, works or sleeps in his newly gained time. These new use cases create new requirements for the future design of driving comfort which are yet to be fully discovered. This work was carried out at the BMW headquarters and aimed to use different machine learning models to investigate and identify patterns between the subjective comfort values reported by participants in a study, on a comfort scale of 1-7, and the mechanical vibrations that they experienced, measured in m/s². The data was collected in a previous independent study, and statistical methods were used to ensure its quality. A comparison of the ISO 2631-1 comfort ratings and the study's findings is made to understand the need for a more sophisticated model to predict comfort in autonomous driving. The work continues by investigating different dimensionality reduction methods and their influence on the performance of the models. The process used to build, optimise and validate neural networks and other models is included in the method chapter, and the results are presented. The work ends with a discussion of both the prediction results and the models' re-usability. The machine learning models investigated in this thesis have shown great potential for detecting complex patterns that link feelings and thoughts to mechanical variables. The models were able to predict the correct level of comfort with up to 50% precision when predicting 6 or 7 levels of comfort. When divided into high versus low discomfort, i.e. predicting one of two comfort levels, the models were able to achieve a precision of up to 75.4%. Excluded from this thesis is the study of differences between the attentive and inattentive states when being driven in an autonomous vehicle: it became clear shortly before the start of this work that the experiment that yielded the data failed to find a statistically significant difference between the two states.
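The ISO 2631-1 baseline the thesis compares against starts from the RMS of the (frequency-weighted) acceleration signal, which the standard relates to comfort reactions. The sketch below is a heavily simplified illustration: it omits frequency weighting entirely, and the non-overlapping thresholds are stand-ins for the standard's overlapping comfort bands, chosen for the example only.

```python
import math

# Simplified sketch (no frequency weighting): RMS of a vertical acceleration
# signal in m/s^2, mapped to an illustrative comfort label. The band limits
# below are assumptions for this example, not the normative ISO 2631-1 bands.

def rms(signal):
    return math.sqrt(sum(a * a for a in signal) / len(signal))

def comfort_label(a_rms):
    if a_rms < 0.315:
        return "not uncomfortable"
    if a_rms < 0.63:
        return "a little uncomfortable"
    return "uncomfortable"

vibration = [0.1, -0.2, 0.15, -0.1, 0.05]   # toy acceleration samples
level = comfort_label(rms(vibration))
```

The thesis's point is precisely that such a single-threshold mapping is too coarse for autonomous driving comfort, motivating the learned models that take richer vibration features as input.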
47

A study on lane detection methods for autonomous driving

Cudrano, Paolo January 2019 (has links)
Machine perception is a key element for the research on autonomous driving vehicles. In particular, we focus on the problem of lane detection with a single camera. Many lane detection systems have been developed and many algorithms have been published over the years. However, while they are already commercially available to deliver lane departure warnings, their reliability is still unsatisfactory for fully autonomous scenarios. In this work, we questioned the reasons for such limitations. After examining the state of the art and the relevant literature, we identified the key methodologies adopted. We present a self-standing discussion of bird’s eye view (BEV) warping and common image preprocessing techniques, followed by gradient-based and color-based feature extraction and selection. Line fitting algorithms are then described, including least squares methods, Hough transform and random sample consensus (RANSAC). Polynomial and spline models are considered. As a result, a general processing pipeline emerged. We further analyzed each key technique by implementing it and performing experiments using data we previously collected. At the end of our evaluation, we designed and developed an overall system, finally studying its behavior. This analysis allowed us on one hand to gain insight into the reasons holding back present systems, and on the other to propose future developments in those directions. / Thesis / Master of Science (MSc)
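Of the line-fitting algorithms surveyed above, RANSAC is the one whose logic is most compactly shown in code: repeatedly fit a minimal model to a random sample and keep the hypothesis with the most inliers. The sketch below is illustrative (a straight-line model rather than the polynomial or spline models the thesis considers; tolerance and iteration count are arbitrary choices for the example).

```python
import random

# Minimal RANSAC sketch for line fitting: sample two points, fit a line,
# count inliers within a tolerance, and keep the best hypothesis.

def fit_line(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1          # y = slope * x + intercept

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = rng.sample(points, 2)
        if sample[0][0] == sample[1][0]:
            continue                        # skip vertical-line pairs
        slope, intercept = fit_line(*sample)
        inliers = [(x, y) for x, y in points
                   if abs(y - (slope * x + intercept)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (slope, intercept), inliers
    return best_model, best_inliers

# Lane-like feature points on y = 2x, plus one outlier.
pts = [(x, 2 * x) for x in range(10)] + [(3, 9.0)]
model, inliers = ransac_line(pts)
```

For lane detection the same loop runs over the extracted lane-feature pixels, with the two-point line replaced by a minimal sample for the chosen polynomial or spline model; the outlier rejection is what makes the fit robust to spurious gradient and color features.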
48

Convolutional Neural Network Detection and Classification System Using an Infrared Camera and Image Detection Uncertainty Estimation

Miethig, Benjamin Taylor January 2019 (has links)
Autonomous vehicles are equipped with systems that can detect and track the objects in a vehicle’s vicinity and make appropriate driving decisions accordingly. Infrared (IR) cameras are not typically employed on these systems, but the new information that can be supplied by IR cameras can help improve the probability of detecting all objects in a vehicle’s surroundings. The purpose of this research is to investigate how IR imaging can be leveraged to improve existing autonomous driving detection systems. This research serves as a proof-of-concept demonstration. In order to achieve detection using thermal images, raw data from seven different driving scenarios was captured and labelled using a calibrated camera. Calibrating the camera made it possible to estimate the distance to objects within the image frame. The labelled images (ground truth data) were then used to train several YOLOv2 neural networks to detect similar objects in other image frames. Deeper YOLOv2 networks trained on larger amounts of data were shown to perform better on both precision and recall metrics. A novel method of estimating pixel error in detected object locations has also been proposed which can be applied to any detection algorithm that has corresponding ground truth data. The pixel errors were shown to be normally distributed with unique spreads about different ranges of y-pixels. Low correlations were seen in detection errors in the x-pixel direction. This methodology can be used to create a gate estimation for the detected pixel location of an object. Detection using IR imaging has been shown to have promising results for applications where typical autonomous sensors can have difficulties. The work done in this thesis has shown that the additional information supplied by IR cameras has potential to improve existing autonomous sensory systems. / Thesis / Master of Applied Science (MASc)
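The gate-estimation idea above — modelling detection pixel errors as normally distributed and bounding the expected location — can be sketched directly. The following is an illustrative outline under the normality assumption the thesis reports; the error values and the choice of k = 2 are invented for the example.

```python
import statistics

# Hedged sketch: from per-detection pixel errors against ground truth, form
# a gate of mean +/- k standard deviations around a detected location.

def estimate_gate(errors, k=2.0):
    mu = statistics.mean(errors)
    sigma = statistics.stdev(errors)
    return mu - k * sigma, mu + k * sigma

def in_gate(error, gate):
    return gate[0] <= error <= gate[1]

# Toy y-pixel errors (detected minus ground-truth position).
errors = [-2.0, -1.0, 0.0, 1.0, 2.0]
low, high = estimate_gate(errors)
```

In the thesis's setting the spread varies with the y-pixel range (objects farther away occupy higher image rows), so a per-range gate would be estimated from the errors falling in each band rather than from a single pooled sample as here.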
49

Real-time Detection and Tracking of Moving Objects Using Deep Learning and Multi-threaded Kalman Filtering : A joint solution of 3D object detection and tracking for Autonomous Driving

Söderlund, Henrik January 2019 (has links)
Perception for autonomous drive systems is the most essential function for safe and reliable driving. LiDAR sensors can be used for perception and are vying to become an essential element of this task. In this thesis, we present a novel real-time solution for detection and tracking of moving objects which utilizes deep learning based 3D object detection. Moreover, we present a joint solution which utilizes the predictability of Kalman filters to infer object properties and semantics to the object detection algorithm, resulting in a closed loop of object detection and object tracking. On the one hand, we present YOLO++, a 3D object detection network operating on point clouds only, which extends YOLOv3, the latest contribution to standard real-time object detection for three-channel images. Our object detection solution is fast, processing images at 20 frames per second. Our experiments on the KITTI benchmark suite show that we achieve state-of-the-art efficiency but mediocre accuracy for car detection, comparable to the result of Tiny-YOLOv3 on the COCO dataset. The main advantage of YOLO++ is that it allows for fast detection of objects with rotated bounding boxes, something Tiny-YOLOv3 cannot do. YOLO++ also performs regression of the bounding box in all directions, allowing 3D bounding boxes to be extracted from a bird's eye view perspective. On the other hand, we present a Multi-threaded Kalman Filtering (MTKF) solution for multiple object tracking. Each unique observation is associated with a thread through a novel concurrent data association process, and each thread contains an Extended Kalman Filter that predicts and estimates the associated object's state over time. Furthermore, a LiDAR odometry algorithm is used to obtain absolute information about the movement of objects, since the movement of objects is inherently relative to the sensor perceiving them. 
We obtain 33 state updates per second when the number of threads equals the number of cores in our main workstation. Although the joint solution has not been tested on a system with sufficient computational power, it is ready for deployment. Using YOLO++ in combination with MTKF, our real-time constraint of 10 frames per second is satisfied by a large margin. Finally, we show that our system can take advantage of the predicted semantic information from the Kalman filters to enhance the inference process in our object detection architecture.
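The per-thread predict/update loop at the heart of the tracker can be illustrated with a scalar Kalman filter. This is a deliberately minimal sketch, not the thesis's Extended Kalman Filter: one-dimensional state, identity dynamics, and illustrative noise values q and r.

```python
# Minimal 1-D Kalman filter sketch of the predict/update cycle each tracking
# thread runs per observation. q and r are illustrative noise variances.

def kf_predict(x, p, q=1.0):
    return x, p + q                        # state carried over, uncertainty grows

def kf_update(x, p, z, r=1.0):
    k = p / (p + r)                        # Kalman gain
    return x + k * (z - x), (1 - k) * p    # blend prediction with measurement z

x, p = 0.0, 1.0                            # initial state estimate and variance
for z in [1.0, 1.0, 1.0]:                  # repeated measurements at 1.0
    x, p = kf_predict(x, p)
    x, p = kf_update(x, p, z)
```

After a few cycles the estimate converges toward the measured value while the variance settles; in the thesis's tracker the state is a full 3D pose/velocity vector, the dynamics are nonlinear (hence the EKF), and the prediction step is what supplies semantic priors back to the detector between measurements.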
50

A Systematic Mapping Study of ADAS and Autonomous Driving

Agha Jafari Wolde, Bahareh January 2019 (has links)
Nowadays, the autonomous driving revolution is getting closer to reality. The first step towards autonomous driving is to develop Advanced Driver Assistance Systems (ADAS). Driver-assistance systems are one of the fastest-growing segments in automotive electronics, and many forms of ADAS are already available. To investigate the state of the art of the development of ADAS towards autonomous driving, we conducted a Systematic Mapping Study (SMS). The SMS methodology is used to collect, classify, and analyze the relevant publications, and a classification is introduced based on the developments carried out in ADAS towards autonomous driving. Following the SMS methodology, we identified 894 relevant publications about ADAS and its developmental journey towards autonomous driving, published from 2012 to 2016. We classify the area of our research along three facets: technical classifications, research types, and research contributions; the related publications fall under thirty-three technical classifications. This thesis sheds light on the achievements and shortcomings in this area. By evaluating the collected results, we answer our seven research questions. The results indicate that most publications belong to the Models and Solution Proposal research type and contribution, while the fewest belong to the Automated…Autonomous driving technical classification, which indicates a lack of publications in this area.
