Machine learning is a powerful approach to the self-driving problem: researchers construct a neural network and train it to drive the car. A self-driving car, however, is a safety-critical system, and a neural network is not necessarily reliable. Its output can be influenced by many factors, such as the quality of the training data and the runtime environment. Moreover, the neural network takes time to produce its output, so the car may not respond in time. These weaknesses increase the risk of accidents. In this thesis, considering the safety of self-driving cars, we apply a delay-aware shielding mechanism to the neural network to protect the self-driving car. Our approach improves on previous research on runtime safety enforcement for general cyber-physical systems, which did not account for the delay in generating the output. The approach consists of two steps. First, we use a formal language to specify the safety properties of the system. Second, we synthesize the specifications into a delay-aware enforcer, called the shield, which corrects any violating output so that the specifications remain satisfied throughout the delay. We use a lane keeping system as a small but representative case study, implemented with an end-to-end neural network. The shield supervises the network's outputs and, using a prediction, verifies the safety properties over the whole delay period; if a violation is found, the shield corrects the output. We test our approach with a 1/16-scale truck on a curvy lane, conducting experiments both in a simulator and on a real road. The results show the effectiveness of our approach.
We improve the safety of a self-driving car, and we will consider more comprehensive driving scenarios and safety features in the future. / Master of Science / Self-driving cars are a hot topic nowadays, and machine learning is a popular method for realizing them: it constructs a neural network that imitates a human driver's behavior to drive the car. However, a neural network is not necessarily reliable. Many things can mislead it into making wrong decisions, such as insufficient training data or a complex driving environment. We therefore need to guarantee the safety of self-driving cars. We are inspired to use a formal language to specify the safety properties that the system should always satisfy. The specifications are then synthesized into an enforcer called the shield: when the system's output violates the specifications, the shield modifies the output to satisfy them. Nevertheless, state-of-the-art research on such shields has a problem: when the specifications are synthesized into a shield, the delay in computing the output is not considered, so the specifications may not remain satisfied during the delay. To solve this problem, we propose a delay-aware shielding mechanism that continually protects the self-driving system. We use a lane keeping system as a small self-driving case study and evaluate the effectiveness of our approach on both a simulation platform and a hardware platform. The experiments show that the safety of our self-driving car is enhanced. We intend to study more comprehensive driving scenarios and safety features in the future.
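The delay-aware shield described above can be illustrated with a minimal sketch: before passing the neural network's steering command to the car, the shield predicts the vehicle state over the delay horizon and checks the safety property (here, a bound on lateral deviation from the lane center), substituting a corrective command if the property would be violated. All names, the kinematic model, and the fallback law below are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of a delay-aware shield for a lane keeping system.
# Hypothetical model and parameters; a simple kinematic model stands in
# for the real vehicle dynamics, and the fallback law is illustrative.

def predict_deviation(deviation, heading, steering, steps, dt=0.05, speed=1.0):
    """Predict lateral deviation over the delay horizon: the heading
    integrates the steering command, and the deviation integrates the
    heading, for `steps` control periods of length `dt`."""
    for _ in range(steps):
        heading += steering * dt
        deviation += speed * heading * dt
    return deviation

def shield(nn_steering, deviation, heading, max_dev=0.3, delay_steps=10):
    """Pass the network's steering through if the safety property
    |deviation| <= max_dev holds over the whole delay; otherwise
    return a corrective command that steers back toward the lane center."""
    predicted = predict_deviation(deviation, heading, nn_steering, delay_steps)
    if abs(predicted) <= max_dev:
        return nn_steering                    # safe: output unchanged
    return -0.5 * (deviation + heading)       # hypothetical corrective fallback
```

A straight, centered car with zero steering is predicted safe and the network's command passes through unchanged; a car already drifting with an aggressive steering command is predicted to exceed the bound during the delay, so the shield overrides it.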
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/99292 |
Date | 07 July 2020 |
Creators | Xu, Hao |
Contributors | Electrical and Computer Engineering, Zeng, Haibo, Hsiao, Michael S., Abbott, A. Lynn |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertation |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |