
Trusted Unmanned Aerial System Operations

Proving the correctness of autonomous systems is challenged by the use of non-deterministic artificial intelligence algorithms and ever-growing code size. While correctness is conventionally established through analysis and testing, it is impossible to train and test a system for all possible scenarios or to formally analyze millions of lines of code. This thesis describes an alternative method that monitors system behavior at runtime and executes a recovery action if any formally specified property is violated. Multiple parallel safety monitors, synthesized from linear temporal logic (LTL) formulas capturing correctness and liveness properties, are implemented in isolated configurable hardware to avoid degrading system performance. Model checking applied to the final implementation establishes the correctness of this last line of defense against malicious attacks and software bugs. The first part of the thesis illustrates the monitor synthesis flow with rules defining a three-dimensional cage for a commercial off-the-shelf drone and demonstrates the effectiveness of the monitoring system in enforcing strict behaviors. The second part defines safety monitors that provide assurances for a virtual autonomous flight beyond visual line of sight. Distinct sets of monitors are called into action during different flight phases to check flight plan conformance, stability, and airborne collision avoidance. A wireless interface supported by the proposed architecture enables configuration of the monitors, eliminating the need to reprogram the FPGA before every flight. Overall, the goal is to increase trust in autonomous systems, as demonstrated with two common drone operations.

Master of Science

Software in autonomous systems such as cars, drones, and robots keeps growing not just in size but also in complexity. The use of machine learning and artificial intelligence algorithms to make decisions can lead to unexpected behavior when the system encounters completely new situations. Traditional software verification methods struggle to establish the absolute correctness of autonomous systems. An alternative to proving correctness is to enforce correct behavior during execution: the system's inputs and outputs are monitored to ensure adherence to formally stated rules. These monitors, automatically generated from rules specified as mathematical formulas, are isolated from the rest of the system and do not affect its performance. The first part of this work demonstrates the feasibility of the approach by adding monitors that impose a virtual cage on a commercially available drone. The second part extends the idea to a simulated autonomous flight through a predefined set of points that the drone must pass. These points, along with the parameters the monitors need, can be uploaded over Bluetooth. The drone's position, speed, and distance to nearby obstacles are independently monitored, and a recovery action is executed if any rule is violated. Since the monitors make no assumptions about the source of a violation, they are effective against malicious attacks, software bugs, and sensor failures. Overall, the goal is to increase confidence in autonomous system operations.
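To make the monitoring idea concrete, the sketch below is a minimal software analogue of one such monitor: a "3D cage" property is checked on every position sample, and a recovery action fires on the first violation. The thesis synthesizes such monitors from LTL formulas into isolated FPGA logic; the cage bounds, class names, and recovery callback here are illustrative assumptions, not the thesis's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cage:
    """Axis-aligned box defining the permitted flight volume (meters)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, x: float, y: float, z: float) -> bool:
        return (self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max and
                self.z_min <= z <= self.z_max)

class CageMonitor:
    """Checks the safety property G(position in cage): at every step,
    the vehicle must remain inside the configured cage."""

    def __init__(self, cage: Cage, recover) -> None:
        self.cage = cage
        self.recover = recover   # recovery action, e.g. command a hover or landing
        self.violated = False

    def step(self, x: float, y: float, z: float) -> None:
        # Latch the first violation and trigger recovery exactly once,
        # mirroring a hardware monitor that asserts a recovery signal.
        if not self.violated and not self.cage.contains(x, y, z):
            self.violated = True
            self.recover(x, y, z)

if __name__ == "__main__":
    # Hypothetical telemetry stream; the last sample leaves the cage.
    monitor = CageMonitor(
        Cage(-10, 10, -10, 10, 0, 5),
        recover=lambda x, y, z: print(f"violation at ({x}, {y}, {z}): recovering"),
    )
    for sample in [(0, 0, 1), (4, 9, 2), (12, 0, 2)]:
        monitor.step(*sample)

Because the monitor only reads inputs and outputs and makes no assumption about why a sample is out of bounds, the same check-and-recover pattern applies whether the cause is an attack, a software bug, or a sensor failure.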

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/98735
Date: 03 June 2020
Creators: Theyyar Maalolan, Lakshman
Contributors: Electrical and Computer Engineering, Patterson, Cameron D., Hsiao, Michael S., Min, Chang Woo
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
