1 |
Riding an e-scooter at nighttime is more dangerous than at daytime
Shah, Nitesh R., Cherry, Christopher R., 28 December 2022 (has links)
With rapidly increasing e-scooter usage in the United States [1], a growing number of studies aim to understand the safety aspects of these emerging modes. The existing literature offers only a limited understanding of time-of-day and seasonal patterns in e-scooter crashes. While many e-scooter safety policies are based on the number of crashes [2, 3], accounting for exposure provides a measure of risk that can inform effective preventive strategies [4]. This study focuses on motor-vehicle-involved crashes since they account for the most severe and fatal injuries. We compared daytime and nighttime motor-vehicle-involved e-scooter crashes and combined them with micromobility trip data to generate exposure variables and estimate crash risk. The key research question of this paper is as follows: 1. Are crashes or crash rates disproportionately higher at night than during the day? [From: Introduction]
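To make the exposure idea concrete, the sketch below contrasts raw crash counts with exposure-adjusted crash rates for day and night. It is an illustrative Python example using hypothetical crash and trip counts, not data or code from the study.

```python
# Illustrative sketch (not from the thesis): comparing nighttime vs. daytime
# e-scooter crash risk once exposure (trip volume) is accounted for.
# All numbers below are hypothetical placeholders.

def crash_rate_per_100k_trips(crashes: int, trips: int) -> float:
    """Exposure-adjusted crash rate: crashes per 100,000 trips."""
    return 100_000 * crashes / trips

# Hypothetical counts of motor-vehicle-involved e-scooter crashes and trips.
day = {"crashes": 40, "trips": 900_000}
night = {"crashes": 25, "trips": 300_000}

day_rate = crash_rate_per_100k_trips(**day)
night_rate = crash_rate_per_100k_trips(**night)

# Raw counts favor nighttime, but the exposure-adjusted rate reverses the
# picture: risk per trip is higher at night in this toy example.
print(f"daytime:   {day_rate:.1f} crashes per 100k trips")
print(f"nighttime: {night_rate:.1f} crashes per 100k trips")
print(f"relative risk (night/day): {night_rate / day_rate:.2f}")
```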
|
2 |
E-scooter Rider Detection System in Driving Environments
Kumar Apurv (11184732), 06 August 2021 (has links)
E-scooters are ubiquitous, and their numbers keep growing, increasing their interactions with other vehicles on the road. E-scooter riders behave very differently from other vulnerable road users, creating new challenges for vehicle active safety systems and automated driving functions. Detecting e-scooter riders from other vehicles is the first step in mitigating these risks. This research presents a novel vision-based system that distinguishes e-scooter riders from regular pedestrians, along with a benchmark dataset of e-scooter riders in natural environments. An efficient pipeline built from two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2, detects these vulnerable e-scooter riders.
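A minimal sketch of the two-stage idea described in the abstract follows: a person detector (a YOLOv3-style model in the thesis) proposes boxes, and a MobileNetV2-based binary classifier labels each crop as an e-scooter rider or a pedestrian. The detector stub, class labels, and preprocessing are illustrative assumptions, not the author's implementation.

```python
# Sketch of a two-stage rider/pedestrian pipeline (illustrative, not the thesis code).
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2
from PIL import Image

# Stage 2: a MobileNetV2 backbone with a 2-way head
# (0 = pedestrian, 1 = e-scooter rider); weights would come from training
# on a labeled crop dataset such as the benchmark described above.
classifier = mobilenet_v2(num_classes=2)
classifier.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def detect_persons(frame: Image.Image) -> list[tuple[int, int, int, int]]:
    """Stage 1 placeholder: a YOLOv3-style detector would return
    (left, top, right, bottom) boxes for the 'person' class."""
    raise NotImplementedError("plug in a YOLOv3 person detector here")

def classify_riders(frame: Image.Image) -> list[tuple[tuple[int, int, int, int], str]]:
    """Run the two-stage pipeline on one frame."""
    results = []
    for box in detect_persons(frame):
        crop = preprocess(frame.crop(box)).unsqueeze(0)
        with torch.no_grad():
            label = classifier(crop).argmax(dim=1).item()
        results.append((box, "e-scooter rider" if label == 1 else "pedestrian"))
    return results
```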
|
3 |
Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D LIDAR and Multi-Camera Setup
Siddhant Srinath Betrabet (9708467), 07 January 2021 (has links)
Analyzing the behavior of objects on the road is a complex task that requires data from various sensors and their fusion to reconstruct object motion with a high degree of accuracy. A data collection and processing system is therefore needed to track objects accurately and to build a clear map of their trajectories relative to the coordinate frame(s) of interest. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that must be solved together to produce a clear map of the road comprising both moving and static objects.

These computational problems are commonly solved to aid scenario reconstruction for the objects of interest. Objects can be tracked in various ways, using sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and inertial navigation systems (INS). One relatively common approach to DATMO and SLAM combines a 3D LIDAR and multiple monocular cameras with an inertial measurement unit (IMU); the resulting redundancy helps maintain object classification and tracking through sensor fusion when sensor-specific algorithms fall short because of the limitations of individual sensors. Using an IMU together with sensor-fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors enables more effective tracking that exploits the full potential of each sensor and improves perceptual accuracy.

The focus of this thesis is the dockless e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and to the world. Since cars are far more commonly observed on the road than e-scooters, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, in order to collect data and understand the behavior of the e-scooters themselves. In this thesis, we explore a data collection system comprising a 3D LIDAR sensor, multiple monocular cameras, and an IMU mounted on an e-scooter, as well as an offline method for processing the collected data to aid scenario reconstruction.
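One core step in the kind of LIDAR-camera fusion such a pipeline relies on is projecting 3D LIDAR points into a camera image using extrinsic and intrinsic calibration. The sketch below illustrates that step under assumed (placeholder) calibration matrices; it is not the thesis pipeline itself.

```python
# Hedged sketch of one LIDAR-camera fusion step: projecting 3D LIDAR points
# into a monocular camera image. The calibration matrices are hypothetical.
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray,
                           T_cam_lidar: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project Nx3 LIDAR points to Mx2 pixel coordinates.

    points_lidar : (N, 3) points in the LIDAR frame
    T_cam_lidar  : (4, 4) homogeneous LIDAR-to-camera transform (extrinsics)
    K            : (3, 3) camera intrinsic matrix
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3, :]
    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[:, pts_cam[2, :] > 0]
    # Perspective projection through the intrinsics.
    pix = K @ pts_cam
    return (pix[:2, :] / pix[2, :]).T

# Toy usage with identity extrinsics and a generic pinhole intrinsic matrix.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[2.0, 0.5, 1.0], [5.0, -1.0, 0.2]])
print(project_lidar_to_image(pts, np.eye(4), K))
```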
|