1. Detection and intention prediction of pedestrians in zebra crossings. Varytimidis, Dimitrios. January 2018.
The behavior of pedestrians who are moving or standing still close to the street is one of the most significant indicators of their imminent actions. Recognizing a pedestrian's activity can therefore reveal significant information about their crossing intention. The scope of this thesis is to investigate methods for improving the understanding of pedestrian activity, in particular detecting pedestrians' motion and head orientation in relation to the surrounding traffic. Different features and methods are examined, used and assessed according to their contribution to distinguishing between actions. The feature extraction methods considered are Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Convolutional Neural Networks (CNNs). The features are extracted from still images of pedestrians in the Joint Attention for Autonomous Driving (JAAD) dataset, taken from video frames that depict pedestrians walking next to the road or crossing it. Based on these features, a number of Machine Learning (ML) techniques (CNN, Artificial Neural Networks, Support Vector Machines, K-Nearest Neighbor and Decision Trees) are used to predict the pedestrian's head orientation and motion as well as their crossing intention. The work is divided into three parts. The first combines feature extraction and ML to predict whether a pedestrian is walking or not. The second identifies the pedestrian's head orientation, i.e. whether he or she is looking at the vehicle, again by combining feature extraction and ML. The final task combines these two measures in an ML-based classifier trained to predict the pedestrian's crossing intention and action. In addition to the pedestrian's behavior, features describing the local environment are added as input signals to the intention classifier, for instance the presence of zebra markings on the street, the location of the scene, and the weather conditions.
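As a hedged illustration of the first part of this pipeline, the sketch below extracts HOG descriptors from pedestrian image crops and trains an SVM to separate walking from standing pedestrians. The `load_crops()` helper, the crop size, the label encoding and the random stand-in data are assumptions for illustration, not the thesis code or the actual JAAD preprocessing.

```python
# Minimal sketch: HOG features from pedestrian crops + SVM for walking vs. standing.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_hog(crop, size=(128, 64)):
    """Resize a pedestrian crop, convert to grayscale and compute a HOG descriptor."""
    gray = rgb2gray(resize(crop, size, anti_aliasing=True))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))


def load_crops():
    """Placeholder for the dataset step: random crops stand in for real JAAD pedestrian images."""
    rng = np.random.default_rng(0)
    crops = [rng.random((160, 80, 3)) for _ in range(60)]
    labels = rng.integers(0, 2, size=60)            # 1 = walking, 0 = standing still (assumed encoding)
    return crops, labels


crops, labels = load_crops()
X = np.stack([extract_hog(c) for c in crops])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels)

clf = SVC(kernel="rbf", C=1.0)                      # one of the compared ML classifiers
clf.fit(X_tr, y_tr)
print("walking vs. standing accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```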
2. Modeling Spatiotemporal Pedestrian-Environment Interactions for Predicting Pedestrian Crossing Intention from the Ego-View. Chen Chen. 06 August 2021.
For pedestrians and autonomous vehicles (AVs) to co-exist harmoniously and safely in the real world, AVs will need to not only react to pedestrian actions, but also anticipate their intentions. In this thesis, we propose to use rich visual and pedestrian-environment interaction features to improve pedestrian crossing intention prediction from the ego-view. We do so by combining visual feature extraction, graph modeling of scene objects and their relationships, and feature encoding as comprehensive inputs for an LSTM encoder-decoder network.
Pedestrians react and make decisions based on their surrounding environment and the behavior of other road users around them. Human-human social relationships have already been explored for pedestrian trajectory prediction from the bird's-eye view of stationary cameras. However, context and pedestrian-environment relationships are often missing in current research on pedestrian trajectory and intention prediction from the ego-view. To map the pedestrian's relationship to the surrounding objects, we use a star graph with the pedestrian in the center connected to all other road objects/agents in the scene. The pedestrian and road objects/agents are represented in the graph by visual features extracted with state-of-the-art deep learning algorithms. We use graph convolutional networks and graph autoencoders to encode the star graphs in a lower dimension. Using the graph encodings, pedestrian bounding boxes, and human pose estimation, we propose a novel model that predicts pedestrian crossing intention using not only the pedestrian's action behaviors (bounding box and pose estimation), but also their relationship to their environment.
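As a rough sketch of the star-graph encoding described above (not the thesis implementation), the snippet below builds a normalized star adjacency with the pedestrian as the center node and applies a single graph convolution to compress per-node visual features into a frame-level encoding. The feature dimensions and the mean-pooling readout are assumptions for illustration.

```python
# Illustrative star-graph encoding: pedestrian (node 0) connected to all scene objects.
import torch
import torch.nn as nn


def star_adjacency(num_nodes: int) -> torch.Tensor:
    """Adjacency of a star graph with self-loops, symmetrically normalized."""
    A = torch.zeros(num_nodes, num_nodes)
    A[0, 1:] = 1.0             # pedestrian connected to every other road object/agent
    A[1:, 0] = 1.0
    A += torch.eye(num_nodes)  # self-loops
    d_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt


class StarGCNEncoder(nn.Module):
    """One graph convolution (A_hat X W) followed by mean pooling over nodes."""

    def __init__(self, in_dim=512, hidden_dim=64):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)

    def forward(self, x, a_hat):
        # x: (num_nodes, in_dim) visual features; a_hat: normalized star adjacency
        h = torch.relu(a_hat @ self.lin(x))
        return h.mean(dim=0)   # pooled graph encoding for one frame


# One frame with a pedestrian and 4 surrounding objects, 512-d visual features each (assumed dims).
x = torch.randn(5, 512)
encoder = StarGCNEncoder()
frame_encoding = encoder(x, star_adjacency(5))
print(frame_encoding.shape)    # torch.Size([64]); a sequence of these can feed the LSTM stage
```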
By tuning hyperparameters and experimenting with different graph convolutions for our graph autoencoder, we are able to improve on the state of the art. Our context-driven method outperforms the current state-of-the-art results on the benchmark Pedestrian Intention Estimation (PIE) dataset. The state of the art predicts pedestrian crossing intention with a balanced accuracy score (used to account for dataset imbalance) of 0.61, while our best performing model reaches a balanced accuracy of 0.79. Our model particularly excels in no-crossing-intention scenarios, with an F1 score of 0.56 compared to the state of the art's 0.36. Additionally, we experiment with training the state-of-the-art model and our model to predict pedestrian crossing action and intention jointly. While jointly predicting crossing action does not improve crossing intention prediction, the distinction between predicting crossing action and crossing intention is an important one.
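A simplified sketch of how a prediction model could combine the graph encodings, bounding boxes and pose features is shown below: the per-frame features are concatenated and fed to an LSTM that outputs a crossing-intention probability. This is a plain LSTM classifier for illustration only; the thesis model is an LSTM encoder-decoder, and the feature dimensions below are assumptions.

```python
# Illustrative crossing-intention classifier over concatenated per-frame features.
import torch
import torch.nn as nn


class IntentionLSTM(nn.Module):
    def __init__(self, bbox_dim=4, pose_dim=36, graph_dim=64, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(bbox_dim + pose_dim + graph_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, bbox, pose, graph_enc):
        # each input: (batch, seq_len, dim); concatenate per-frame features
        x = torch.cat([bbox, pose, graph_enc], dim=-1)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))   # P(crossing intention)


# 16 observation frames for 2 pedestrians, with random stand-in data.
bbox = torch.randn(2, 16, 4)        # x, y, w, h
pose = torch.randn(2, 16, 36)       # 18 keypoints * (x, y), an assumed pose format
graph_enc = torch.randn(2, 16, 64)  # per-frame star-graph encodings
model = IntentionLSTM()
print(model(bbox, pose, graph_enc))  # shape (2, 1)
```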
3. Vehicle Action Intention Prediction in an Uncontrolled Traffic Situation. Wang, Yijun. January 2024.
Vehicle action intention prediction plays an increasingly crucial role in automated driving and traffic safety. It allows automated vehicles to comprehend other road participants' current actions and foresee their upcoming actions, which can significantly reduce the likelihood of traffic accidents and enhance overall road safety. Meanwhile, by anticipating other vehicles' movements on the road, the ego vehicle can plan its velocity and trajectory in advance and make smoother and finer adjustments during the whole driving process, contributing to safer and more efficient traffic. Furthermore, intention prediction enables vehicles to respond proactively rather than reactively, as in traditional ADAS (Advanced Driver Assistance Systems) functions such as AEB (Automatic Emergency Braking), facilitating a more preventive, early-intervention approach to traffic safety. Under normal conditions, traffic behavior is governed by traffic rules; this thesis instead explores vehicle behavior with intention prediction models in a scenario without traffic rules. We have a unique dataset of vehicle trajectories collected from two cameras installed overhead on a one-lane narrowing street, where vehicles need to negotiate their right of way. After pre-processing these data into specific input structures, we use different classifier models, including both traditional and deep learning methods, to predict vehicle action intentions. The data were organized in 3-second windows and contained vehicle position and distance to the center of the intersection, along with the speed of both vehicles. We compared and evaluated the model performances and found that MLPs (Multi-Layer Perceptrons) and LSTMs (Long Short-Term Memory networks) yield the best performance. Furthermore, a feature selection method and a feature importance analysis are applied to explore which variables influence the model most, in order to explain the internal workings of the prediction model. Close to the narrowing street, the first and last samples of position and distance in the time window, together with the last sample of both vehicles' speed, influence model performance the most. Further away from the narrowing street, the first and last samples of the vehicle's position have the greatest influence.
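As a hedged sketch of the windowed classification setup described above (not the thesis code), the snippet below flattens each 3-second window of per-vehicle position, distance-to-intersection and speed into one feature vector and trains an MLP on the resulting dataset. The sampling rate, the number of features, the label encoding and the random stand-in data are all assumptions.

```python
# Illustrative windowed intention classification with an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FPS = 10                      # assumed camera rate; a 3 s window then holds 30 samples
WINDOW = 3 * FPS


def make_windows(track: np.ndarray, label: int):
    """track: (T, n_features) time series for one interaction; yields flattened 3 s windows."""
    for start in range(0, len(track) - WINDOW + 1, WINDOW):
        yield track[start:start + WINDOW].ravel(), label


def build_dataset(tracks, labels):
    X, y = [], []
    for track, label in zip(tracks, labels):
        for feats, lab in make_windows(track, label):
            X.append(feats)
            y.append(lab)
    return np.array(X), np.array(y)


# Random stand-in data: 6 features per time step (position, distance and speed of both vehicles),
# labels 0 = yield, 1 = proceed (illustrative encoding).
rng = np.random.default_rng(0)
tracks = [rng.normal(size=(90, 6)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X, y = build_dataset(tracks, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```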
4. VR-Based Testing Bed for Pedestrian Behavior Prediction Algorithms. Faria Armin. 30 August 2023.
With the introduction of semi- and fully automated vehicles on the road, drivers will be less inclined to focus on the traffic interaction and will instead rely on the vehicles' decision-making. However, encountering pedestrians still poses a significant difficulty for modern automated driving technologies. Given the high complexity of modeling human behavior in real-world problems, deep-learning algorithms trained on naturalistic data have become promising solutions. Nevertheless, although developing such algorithms is achievable through scene data collection and driver knowledge extraction, evaluation remains challenging due to potential crash risks and limitations in acquiring ground-truth intention changes.

This study proposes a VR-based testing bed to evaluate real-time pedestrian intention algorithms, since VR simulators are recognized for their affordability and adaptability in producing a variety of traffic situations and offer a more reliable setting for human-factors research involving autonomous cars. The pedestrian wears a head-mounted display or uses keyboard input and makes decisions according to the circumstances. The simulator provides a credible and robust experience, essential for eliciting realistic real-time pedestrian behavior. While crossing the road, there is uncertainty associated with pedestrian intention, and our simulator anticipates the crossing intention while accounting for this ambiguity in pedestrian behavior. A case study was performed with multiple subjects in several crossing conditions based on day-to-day activities. The outcomes indicate that pedestrian intention can be inferred precisely using this VR-based simulator, although the accuracy of the prediction can differ considerably in some cases depending on the speed of the car and the distance between the vehicle and the pedestrian.