1

Map Based Sensor Fusion for Lane Boundary Estimation on ADAS / Sensorfusion med Kartdata för Estimering av Körfältsgränser på ADAS

Faghi, Puya January 2023
A vehicle's ability to detect and estimate its surroundings is important for ensuring the safety of the vehicle and its passengers, regardless of the level of vehicle autonomy. With improved road and lane estimation, advanced driver-assistance systems can provide earlier and more accurate warnings and interventions to prevent a possible accident. Current lane boundary estimation relies on camera and inertial sensor data to detect and estimate the relevant lane boundaries in the vehicle's surroundings. The current estimation system struggles to provide correct estimates at distances exceeding 75 meters, and its performance is affected by environmental conditions. The methods in this thesis show how map data, combined with sensor fusion of radar, camera, inertial measurement unit, and global navigation satellite system data, can improve lane boundary estimation. The map-based estimation system is implemented and evaluated for high-speed roads (highways and country roads), where lane boundary estimates at distances above 75 meters are needed. The experiments are conducted in a simulated environment and show how the map-based system can correct unreliable sensor input to provide more precise boundary estimates. The map-based system also provides up to a 36% relative increase in correctly identified objects within the ego vehicle's lane between 12.5 and 150 meters in front of the ego vehicle. The results indicate that the horizon in which driver-assistance functions can operate can be extended, thus increasing the safety of future autonomous or semi-autonomous vehicles. Future work is needed to apply map-based estimation in urban areas. The precision of such a system also depends on accurate positioning data; incorporating more precise global navigation data could further improve performance.
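The abstract does not give the fusion equations, but the idea of weighting a map-derived lane boundary prior against range-dependent camera measurements can be sketched as a simple inverse-variance (static Kalman-style) update. The sketch below is an illustration only: the function name, sample distances, and variance values are assumptions, not the thesis' actual method.

```python
import numpy as np

def fuse_lane_boundary(map_offsets, camera_offsets, map_var, camera_var):
    """Fuse map-based and camera-based lateral lane-boundary offsets.

    Each array holds the estimated lateral offset (meters) of a lane boundary
    at a set of longitudinal sample distances ahead of the ego vehicle.
    The fusion is an inverse-variance weighted average, i.e. a per-point
    static Kalman update. All quantities here are illustrative assumptions.
    """
    w_map = 1.0 / map_var
    w_cam = 1.0 / camera_var
    fused = (w_map * map_offsets + w_cam * camera_offsets) / (w_map + w_cam)
    fused_var = 1.0 / (w_map + w_cam)
    return fused, fused_var

# Illustrative values: camera uncertainty grows with distance, so the map
# prior dominates the far range (beyond roughly 75 m).
distances = np.arange(12.5, 150.0, 12.5)               # meters ahead of ego
map_offsets = np.full_like(distances, 1.75)             # map: boundary at +1.75 m
camera_offsets = 1.75 + 0.1 * np.random.randn(len(distances)) * (distances / 50.0)
map_var = 0.05                                          # assumed map variance
camera_var = 0.02 * (distances / 25.0) ** 2             # variance grows with range

fused, fused_var = fuse_lane_boundary(map_offsets, camera_offsets, map_var, camera_var)
print(np.round(fused, 3))
```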
2

An Effective Framework of Autonomous Driving by Sensing Road/motion Profiles

Zheyuan Wang (11715263) 22 November 2021
With more and more videos taken by dash cams on thousands of cars, retrieving these videos and searching them for important information is a daunting task. The purpose of this work is to mine key road and vehicle motion attributes from a large-scale driving video data set for traffic analysis, sensing algorithm development, and autonomous driving test benchmarks. Current sensing and control of autonomous cars based on full-view identification makes it difficult to maintain a high update frequency for a fast-moving vehicle, since increasing computation is needed to cope with changes in the driving environment.

A big challenge in video data mining is how to deal with huge amounts of data. We use a compact representation called the road profile system to visualize the road environment in long 2D images. It reduces each video frame to one line, thereby compressing a video clip into a single image. This dimensionality reduction has several advantages. First, the data size is greatly compressed: each frame becomes one line, so a video is reduced to an image and the data size shrinks by a factor of several hundred, while the useful information in the driving video is preserved and motion information is represented even more intuitively. Because of this reduction, the identification algorithms are computationally more efficient than full-view identification, which makes real-time identification on the road possible. Second, the data is easier to visualize: the three-dimensional video data is compressed into two dimensions, which is more conducive to visualization and to comparing data sets with one another. Third, continuously changing attributes are easier to display and capture: with two-dimensional data, the position, color, and size of the same object across a few frames are easier to compare, and in many cases the difficulties caused by tracking and matching can be eliminated. Based on the road profile system, three autonomous driving tasks are achieved using road profile images.

The first application is road edge detection under different weather and appearance conditions for road following in autonomous driving, using the road profile image and the linearity profile image of the road profile system. This work mines naturalistic driving video to study the appearance of roads, covering large-scale road data and its variation. A large number of naturalistic driving video sets are mined to sample the light-sensitive area for color feature distributions. An effective road contour image is extracted from long driving videos, greatly reducing the amount of video data, after which the weather and lighting type can be identified. For each weather and lighting condition, distinctive features are identified at the edge of the road to distinguish the road edge.

The second application is detecting vehicle interactions in driving videos via the motion profile image of the road profile system. This work uses visual actions recorded in driving videos taken by a dashboard camera to identify these interactions. The motion profile images of the video are filtered at key locations, reducing the complexity of object detection, depth sensing, target tracking, and motion estimation. The purpose of this reduction is decision making for vehicle actions such as lane changing, vehicle following, and cut-in handling.

The third application is motion planning based on vehicle interactions and driving video. Noting that a car largely travels in a straight line, we identify a few sample lines in the view to constantly scan the road, vehicles, and environment, generating only a portion of the entire video data. Without redundant data processing, we apply semantic segmentation to streaming road profile images and plan the vehicle's path and motion using the smallest data set that contains all the information necessary for driving.

The results are obtained efficiently, and the accuracy is acceptable. They can be used for driving video mining, traffic analysis, driver behavior understanding, and related tasks.
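As an illustration of the core road profile idea (one scan line per frame, stacked over time into a 2D image), the minimal sketch below samples a fixed row from each frame of a dash-cam clip. The sampling row, the file names, and the OpenCV-based video reading are assumptions made for illustration; the thesis' actual sampling of road and motion profiles is more elaborate.

```python
import cv2
import numpy as np

def build_road_profile(video_path, sample_row=None):
    """Build a road profile image: one scan line per frame, stacked over time.

    Each frame is reduced to a single pixel row (the scan line), so an entire
    clip collapses into one 2D image whose vertical axis is time and whose
    horizontal axis is the image column. `sample_row` is the row to sample;
    if None, a row in the lower part of the frame (roughly the road region)
    is used. This is a sketch, not the thesis' exact sampling scheme.
    """
    cap = cv2.VideoCapture(video_path)
    rows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if sample_row is None:
            sample_row = int(frame.shape[0] * 0.75)  # assumed road region
        rows.append(frame[sample_row])               # one line per frame
    cap.release()
    return np.stack(rows) if rows else None          # shape: (num_frames, width, 3)

# Hypothetical usage: the clip and output file names are assumptions.
profile = build_road_profile("dashcam_clip.mp4")
if profile is not None:
    cv2.imwrite("road_profile.png", profile)
```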
