11

Drivers' Visual Focus Areas on Complex Road Networks in Strategic Circumstances: An Experimental Analysis

Shah, Abhishek 14 December 2022 (has links)
No description available.
12

Enhanced 3D Object Detection And Tracking In Autonomous Vehicles: An Efficient Multi-modal Deep Fusion Approach

Priyank Kalgaonkar (10911822) 03 September 2024 (has links)
This dissertation delves into a significant challenge for Autonomous Vehicles (AVs): achieving efficient and robust perception under adverse weather and lighting conditions. Systems that rely solely on cameras face difficulties with visibility over long distances, while radar-only systems struggle to recognize features like stop signs, which are crucial for safe navigation in such scenarios.

To overcome this limitation, this research introduces a novel deep camera-radar fusion approach using neural networks. This method ensures reliable AV perception regardless of weather or lighting conditions. Cameras, like human vision, are adept at capturing rich semantic information, whereas radar can see through fog and darkness, much like X-ray vision.

The thesis presents NeXtFusion, an innovative and efficient camera-radar fusion network designed specifically for robust AV perception. Building on the efficient single-sensor NeXtDet neural network, NeXtFusion significantly enhances object detection accuracy and tracking. A notable feature of NeXtFusion is its attention module, which refines critical feature representations for object detection, minimizing information loss when processing data from both cameras and radars.

Extensive experiments conducted on large-scale datasets such as Argoverse, Microsoft COCO, and nuScenes thoroughly evaluate the capabilities of NeXtDet and NeXtFusion. The results show that NeXtFusion excels in detecting small and distant objects compared to existing methods. Notably, NeXtFusion achieves a state-of-the-art mAP score of 0.473 on the nuScenes validation set, outperforming competitors such as OFT by 35.1% and MonoDIS by 9.5%.

NeXtFusion’s excellence extends beyond mAP scores. It also performs well in other crucial metrics, including mATE (0.449) and mAOE (0.534), highlighting its overall effectiveness in 3D object detection. Visualizations of real-world scenarios from the nuScenes dataset processed by NeXtFusion provide compelling evidence of its capability to handle diverse and challenging environments.
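The abstract describes an attention module that weighs camera and radar features before fusing them. As a rough illustration only, the following minimal PyTorch sketch shows one common way to do channel-attention fusion over two feature maps already projected to a shared grid; the module name FusionAttention, the reduction factor, and the residual 1x1 convolution are illustrative assumptions, not NeXtFusion's actual architecture.

```python
# Minimal sketch of attention-weighted camera-radar feature fusion, assuming
# both sensor branches have already been projected to a common feature grid
# of shape (B, C, H, W). Names and layer choices are illustrative, not the
# thesis's implementation.
import torch
import torch.nn as nn


class FusionAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-attention gate computed from the concatenated features;
        # it decides, per channel, how much each modality contributes.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, cam_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([cam_feat, radar_feat], dim=1)   # (B, 2C, H, W)
        weights = self.gate(stacked)                         # (B, C, 1, 1)
        # Blend the modalities: weight -> camera, (1 - weight) -> radar.
        blended = weights * cam_feat + (1.0 - weights) * radar_feat
        # Residual refinement from the raw concatenation keeps information
        # the soft blend might otherwise discard.
        return blended + self.fuse(stacked)


if __name__ == "__main__":
    cam = torch.randn(2, 64, 128, 128)
    rad = torch.randn(2, 64, 128, 128)
    fused = FusionAttention(channels=64)(cam, rad)
    print(fused.shape)  # torch.Size([2, 64, 128, 128])
```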
13

Jointly Ego Motion and Road Geometry Estimation for Advanced Driver Assistance Systems

Asghar, Jawaria January 2021 (has links)
In recent years, efforts to develop autonomous cars have increased remarkably. Autonomous driving systems combine several sensor-based techniques for perceiving the environment and could drastically reduce the number of road accidents by removing human errors related to driver inattention and poor driving decisions. In this thesis, an algorithm for joint ego-vehicle motion and road geometry estimation for Advanced Driver Assistance Systems (ADAS) is developed. The measurements are obtained from inertial sensors, wheel speed sensors, a steering wheel angle sensor, and a camera. An Unscented Kalman Filter (UKF) is used to estimate the states of the non-linear system because it propagates sigma points through the non-linear models directly, avoiding the analytical Jacobians required by linearization-based filters. The proposed algorithm has been tested on both winding and straight roads. Its robustness was demonstrated through experiments that added noise to the measurements, reduced the process noise covariance matrix, and increased the measurement noise covariance matrix; these tests increased confidence in the behaviour of the tracker. For evaluation, each estimated parameter was compared with its reference signal, and the estimates match the references closely in both scenarios. The joint algorithm was also compared with separate ego-vehicle motion and road geometry estimators; the results clearly show that joint estimation yields better estimates than estimating each quantity separately.
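The joint estimation described above can be sketched with a generic UKF library. The following minimal example uses filterpy with an assumed four-dimensional state (velocity, yaw rate, road curvature, curvature rate) and three measurements (wheel-speed velocity, gyro yaw rate, camera-reported curvature); the state vector, models, and noise values are illustrative assumptions, not the thesis's actual formulation.

```python
# Minimal sketch of joint ego-motion / road-geometry estimation with an
# Unscented Kalman Filter (filterpy). All models and tuning values below are
# illustrative assumptions.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.05  # assumed sensor sampling interval [s]


def fx(x, dt):
    """Process model: nearly constant velocity and yaw rate; the road
    curvature c0 evolves with its rate c1 as the vehicle advances."""
    v, w, c0, c1 = x
    return np.array([v, w, c0 + c1 * v * dt, c1])


def hx(x):
    """Measurement model: wheel-speed velocity, gyro yaw rate, and the
    curvature reported by the camera's lane detection."""
    v, w, c0, _ = x
    return np.array([v, w, c0])


points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=3, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([20.0, 0.0, 0.0, 0.0])      # initial state guess
ukf.P = np.diag([1.0, 0.1, 1e-4, 1e-6])      # initial uncertainty
ukf.Q = np.diag([0.5, 0.05, 1e-6, 1e-8])     # process noise
ukf.R = np.diag([0.25, 0.01, 1e-5])          # measurement noise

# One filter step per incoming measurement z = [v_wheel, yaw_gyro, c0_camera].
z = np.array([20.3, 0.02, 1.5e-3])
ukf.predict()
ukf.update(z)
print(ukf.x)
```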
