151 |
Vérification automatique de la protection de la vie privée : entre théorie et pratique / Automated Verification of Privacy in Security Protocols: Back and Forth Between Theory & Practice. Hirschi, Lucca, 21 April 2017.
The information society we belong to heavily relies on secure information exchanges. To exchange information securely, one should use security protocols that specify how communicating agents should behave, notably by using cryptographic primitives (e.g. encryption, signatures). Given their ubiquitous and critical nature, we need to reach an extremely high level of confidence that they actually meet their goals. Those goals can be various and depend on the usage context but, more and more often, they include privacy properties (e.g. anonymity, unlinkability). Unfortunately, designed and deployed security protocols are often flawed and critical attacks are regularly disclosed, even on protocols of utmost importance, leading to a never-ending cycle between attack and fix. To break the present stalemate, we advocate the use of formal methods, which provide rigorous mathematical frameworks and techniques to analyse security protocols. One such method, allowing for a very high level of automation, consists in analysing security protocols in the symbolic model and modelling privacy properties as equivalences between two systems. Unfortunately, deciding such equivalences is undecidable in the general case. To circumvent undecidability, two main approaches have emerged. First, for a bounded number of agents and sessions of the security protocol to analyse, it is possible to symbolically explore all possible executions, yielding decision procedures for equivalence between systems. Second, for the general case, one can semi-decide the problem by leveraging dedicated abstractions, notably a strong form of equivalence (i.e. diff-equivalence). The two approaches, i.e. 
decision for the bounded case and semi-decision for the unbounded case, suffer from two different problems that significantly limit their practical impact. First, (symbolically) exploring all possible executions leads to the so-called state-space explosion problem, caused by the concurrent nature of security protocols. Concerning the unbounded case, diff-equivalence is too imprecise to meaningfully analyse some privacy properties such as unlinkability, nullifying methods and tools relying on it for such cases. In the present thesis, we address those two problems, going back and forth between theory and practice. Practical aspects motivate our work, but our solutions take the form of theoretical developments. Moreover, we make the effort to confirm the practical relevance of our solutions by putting them into practice (implementations) on real-world case studies (analyses of real-world security protocols). First, we have developed new partial-order reduction techniques that dramatically reduce the number of states to explore without losing any attack. We design them to be compatible with equivalence verification and such that they can be integrated naturally into the frameworks on which existing procedures and tools are based. We formally prove the soundness of such an integration in a tool and provide a full implementation. We are thus able to provide benchmarks showing dramatic speedups brought by our techniques, and conclude that more protocols can henceforth be analysed. Second, to solve the precision issue for the unbounded case, we propose a new methodology based on the idea of ensuring privacy via sufficient conditions. We present two conditions that always imply unlinkability and anonymity, and that can be verified using existing tools (e.g. ProVerif). We implement a tool that puts this methodology into practice, hence solving the precision issue for a large class of protocols. 
This novel approach enabled the first formal analyses of several real-world protocols (some of them widely deployed) and led to the discovery of novel attacks.
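The partial-order reduction idea behind the first contribution can be caricatured with a toy sketch (not the thesis's actual algorithm): when a set of actions is fully independent, a single representative interleaving reaches the same state as any other, so a naive exploration of all n! interleavings is wasted work. The action names below are invented for illustration.

```python
from itertools import permutations

def explore_all(actions):
    """Naive exploration: every interleaving of the actions is a distinct trace."""
    return {tuple(p) for p in permutations(actions)}

def explore_por(actions):
    """Toy partial-order reduction: since the actions are assumed fully
    independent (they commute), one representative interleaving suffices."""
    return {tuple(sorted(actions))}

acts = ["recv_A", "recv_B", "send_C", "fresh_nonce"]
assert len(explore_all(acts)) == 24   # 4! interleavings explored naively
assert len(explore_por(acts)) == 1    # one representative trace
```

Real protocol actions are only partially independent, so actual reduction techniques must detect commuting pairs and preserve all attacks; the speedup shown here is the best case.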
152 |
Semantics of Strategy Logic / Les choix sémantiques dans la Strategy Logic. Gardy, Patrick, 12 June 2017.
With the proliferation of computerised devices, software verification is more prevalent than ever. Since the 1980s, multiple costly software failures have forced both private and public actors to invest in software verification. Among the main techniques is model checking, developed by Clarke and Emerson in the 1980s. 
It consists in abstracting both the system into a formal model and the expected behaviour into some logical formalism, then checking whether the property's abstraction holds on the system's abstraction. The difficulty lies in finding appropriate models and efficient algorithms. In this thesis, we focus on one particular logical formalism: Strategy Logic (SL), used to express multi-objective properties of multi-agent systems. Strategy Logic is a powerful and expressive formalism that treats strategies (i.e. potential behaviours of the agents) as first-order objects. It can be seen as an analogue of first-order logic for multi-agent systems. Many semantic choices were made in its definition without much discussion. Our main contributions concern the possibilities left behind by the original definition. We first introduce SL and present some complexity results (including some of our own). We then outline other semantic choices within SL's definition and study their influence. Third, we study the logic's behaviour over quantitative multi-agent systems (games with energy and counter constraints). Finally, we address the problem of dependencies within SL[BG], a fragment of SL.
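The model-checking recipe described above — abstract the system, abstract the property, then check one against the other — can be sketched on a toy Kripke structure. The states, transitions and labels below are invented for illustration, and plain reachability stands in for a full temporal-logic procedure.

```python
from collections import deque

# Toy Kripke structure: states, a total transition relation, and a labelling
# of atomic propositions (the "formal model" of a system).
transitions = {"s0": ["s1", "s2"], "s1": ["s0"], "s2": ["s2"]}
labels = {"s0": {"init"}, "s1": {"busy"}, "s2": {"done"}}

def reachable(init):
    """BFS over the transition relation: all states reachable from `init`."""
    seen, todo = set(), deque([init])
    while todo:
        s = todo.popleft()
        if s not in seen:
            seen.add(s)
            todo.extend(transitions[s])
    return seen

# "EF done": some reachable state satisfies `done`.
assert any("done" in labels[s] for s in reachable("s0"))
# "AG not error": no reachable state satisfies `error`.
assert all("error" not in labels[s] for s in reachable("s0"))
```

Strategy Logic properties are far richer than these reachability checks — they quantify over strategies of multiple agents — but the overall model-checking loop has the same shape.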
153 |
Applying Formal Methods to Autonomous Vehicle Control / Application des méthodes formelles au contrôle du véhicule autonome. Duplouy, Yann, 26 November 2018.
This thesis takes place in the context of autonomous vehicle design and concerns, more specifically, the verification of controllers of such vehicles. 
Our contributions are the following: (1) give a syntax and a semantics for a hybrid system model, (2) extend the capabilities of the model checker Cosmos to this kind of model, and (3) empirically confirm the relevance of our approach on case studies typical of autonomous vehicles. We chose to combine high-level stochastic Petri nets (the input formalism of Cosmos) with the input formalism of Simulink, to obtain an adequate expressive power. Indeed, Simulink is widely used in the automotive industry and numerous controllers have been specified with this tool. However, Simulink has no formal semantics, which led us to define one in two steps: first, we propose an exact (but not operational) semantics; then we complete it with an approximate semantics that includes the targeted approximation level. In order to combine the discrete-event model of Petri nets with the continuous model specified in Simulink, we define a syntactic interface that relies on new transition types; its semantics consists of an extension of the simulation loop. The evaluation of this new formalism has been entirely implemented in Cosmos. Using this new formalism, we have designed and studied the following two case studies: on the one hand, dense traffic on a motorway segment, and on the other hand, the insertion of a vehicle into a motorway. Our approach has been validated by the analysis of the corresponding models.
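The coupling of a discrete-event formalism with continuous Simulink-style dynamics through an extended simulation loop can be caricatured as follows. This is a deliberately minimal sketch: the dynamics, guard, and mode names are invented, and the Petri-net side is reduced to a single discrete transition.

```python
# Toy hybrid simulation loop: continuous dynamics advance by Euler steps,
# and a discrete transition fires when a guard on the continuous state
# becomes true -- the flavour of interleaving a discrete-event model with
# a continuous one inside a single simulation loop.
def simulate(x0=0.0, dt=0.01, horizon=5.0):
    x, t, mode = x0, 0.0, "accelerate"
    while t < horizon:
        dx = 2.0 if mode == "accelerate" else -1.0
        x += dt * dx                      # continuous step
        if mode == "accelerate" and x >= 5.0:
            mode = "brake"                # discrete transition fires
        t += dt
    return x, mode

x, mode = simulate()
assert mode == "brake"                    # the guard was crossed mid-run
```

A real coupling must also handle stochastic firing times and approximation error, which is what the exact-versus-approximate semantics in the thesis addresses.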
154 |
Vérification et validation de politiques de contrôle d'accès dans le domaine médical / Verification and validation of healthcare access control policies. Huynh, Nghi, 06 December 2016.
In healthcare, data digitization and the use of Electronic Health Records (EHR) offer several benefits, such as reducing the space occupied by data and easing data search and exchange. IT systems must gradually take over the role of the archivists who used to manage access to sensitive data. These accesses must be checked for consistency with patients' privacy wishes, hospital rules, and laws and regulations. SGAC, or Solution de Gestion Automatisée du Consentement, aims to offer a solution in which access to patient data is based not only on rules set by the patient, but also on hospital rules and on the law. However, the freedom granted to the patient can cause several problems: conflicts, hiding of data needed to treat the patient, or simple data-entry errors. Therefore, verification and validation of policies are crucial: to conduct this verification, formal methods provide reliable verification techniques such as proofs and model checking. This thesis provides verification methods applied to SGAC on behalf of the patient: it introduces the formal model of SGAC and verification methods for properties such as data reachability and the detection of inaccessible documents. To conduct these verifications in an automated way, SGAC is modelled in B and Alloy; these models give access to the tools ProB and Alloy, and thus to automated property verification via model checking.
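As a rough illustration of the kind of property SGAC's verification targets — not SGAC's actual model, which is expressed in B and Alloy — the sketch below encodes a tiny deny-overrides policy and checks for documents hidden from every caregiver. All names and rules are hypothetical.

```python
# Toy access policy: (decision, subject, document) triples, deny overriding
# permit -- a drastically simplified stand-in for patient/hospital/law rules.
rules = [
    ("permit", "dr_a", "radiology"),
    ("permit", "dr_b", "radiology"),
    ("permit", "dr_a", "psych_note"),
    ("deny",   "dr_a", "psych_note"),   # patient masked this document
    ("deny",   "dr_b", "psych_note"),
]

def allowed(subject, doc):
    """Deny-overrides evaluation of the rule set."""
    if ("deny", subject, doc) in rules:
        return False
    return ("permit", subject, doc) in rules

def hidden_documents(subjects, docs):
    """Documents no caregiver can read -- the kind of safety red flag
    that hidden-data detection is meant to surface."""
    return [d for d in docs if not any(allowed(s, d) for s in subjects)]

assert allowed("dr_a", "radiology")
assert hidden_documents(["dr_a", "dr_b"], ["radiology", "psych_note"]) == ["psych_note"]
```

In SGAC proper, such checks are discharged by ProB and Alloy over the full formal model rather than by enumeration.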
155 |
Optimal control and learning for safety-critical autonomous systems. Xiao, Wei, 27 September 2021.
Optimal control of autonomous systems is a fundamental and challenging problem, especially when many stringent safety constraints and tight control limitations are involved such that solutions are hard to determine. It has been shown that optimizing quadratic costs while stabilizing affine control systems to desired (sets of) states subject to state and control constraints can be reduced to a sequence of Quadratic Programs (QPs) by using Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs). Although computationally efficient, this method is limited by several factors which are addressed in this dissertation.
The first contribution of this dissertation is to extend CBFs to high order CBFs (HOCBFs) that can accommodate systems and constraints of arbitrary relative degree. The satisfaction of Lyapunov-like conditions in the HOCBF method implies the forward invariance of the intersection of a sequence of sets, which in turn guarantees the satisfaction of the original safety constraint. Second, under tight control bounds, this dissertation proposes an analytical method to find sufficient conditions that guarantee QP feasibility. The sufficient conditions are captured by a single state constraint that is enforced by a CBF and then added to the QP. Third, for complex safety constraints and systems for which sufficient feasibility conditions are hard to find, machine learning techniques are employed to learn the definitions of the HOCBFs or of the feasibility constraints. Fourth, when time-varying control bounds and noisy dynamics are involved, adaptive CBFs (AdaCBFs) are proposed, which guarantee the feasibility of the QPs whenever the original optimization problem itself is feasible. Finally, for systems with unknown dynamics, adaptive affine control dynamics are proposed to approximate the real, unmodelled system dynamics; the approximation is updated based on the error states obtained from real-time sensor measurements. A set of events required to trigger a solution of the QP in order to guarantee safety is defined, and a condition that guarantees the satisfaction of the HOCBF constraint between events is derived.
In order to address the myopic nature of the CBF method, a real-time control framework that combines optimal trajectories and the computationally efficient HOCBF method providing safety guarantees is also proposed. The HOCBFs and CLFs are used to account for constraints with arbitrary relative degrees and to track the optimal state, respectively. Eventually, an optimal control problem based on the proposed framework is always reduced to a sequence of QPs regardless of the formulation of the original cost function. Another contribution of the dissertation is to apply the above proposed methods to solve complex safety-critical optimal control problems, such as those arising in rule-based autonomous driving and optimal traffic merging control for Connected and Automated Vehicles (CAVs).
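The reduction to a sequence of QPs can be illustrated on a scalar toy system (an assumption of this sketch, not an example from the dissertation): with dynamics x' = u and safe set h(x) = x >= 0, the CBF condition h'(x) >= -alpha*h(x) becomes a single linear constraint u >= -alpha*x, and the resulting one-variable QP has a closed-form solution.

```python
# Scalar CBF-QP sketch.  Minimize (u - u_des)^2 subject to u >= -alpha * x;
# with one variable and one linear constraint the solution is a clipping:
# u = max(u_des, -alpha * x).
def cbf_qp(x, u_des, alpha=1.0):
    return max(u_des, -alpha * x)

# Simulate with a nominal controller that deliberately pushes toward the
# unsafe region; the CBF filter keeps the state in the safe set.
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_qp(x, u_des=-5.0)
    x += dt * u
assert x >= 0.0          # forward invariance of {x >= 0} is preserved
```

In the general multi-dimensional case the QP is solved numerically at each control step, and the HOCBF machinery supplies the analogous linear constraint when the safety function has higher relative degree.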
156 |
Interpretation, Verification and Privacy Techniques for Improving the Trustworthiness of Neural Networks. Dethise, Arnaud, 22 March 2023.
Neural Networks are powerful tools used in Machine Learning to solve complex problems across many domains, including biological classification, self-driving cars, and automated management of distributed systems. However, practitioners' trust in Neural Network models is limited by the models' inability to answer important questions about their behavior, such as whether they will perform correctly or whether they can be entrusted with private data.
One major issue with Neural Networks is their "black-box" nature, which makes it challenging to inspect the trained parameters or to understand the learned function. To address this issue, this thesis proposes several new ways to increase the trustworthiness of Neural Network models.
The first approach focuses specifically on Piecewise Linear Neural Networks, a popular flavor of Neural Networks used to tackle many practical problems. The thesis explores several different techniques to extract the weights of trained networks efficiently and use them to verify and understand the behavior of the models. The second approach shows how strengthening the training algorithms can provide guarantees that are theoretically proven to hold even for the black-box model.
The first part of the thesis identifies errors that can exist in trained Neural Networks, highlighting the importance of domain knowledge and the pitfalls to avoid with trained models. The second part aims to verify the outputs and decisions of the model by adapting the technique of Mixed Integer Linear Programming to efficiently explore the possible states of the Neural Network and verify properties of its outputs. The third part extends the Linear Programming technique to explain the behavior of a Piecewise Linear Neural Network by breaking it down into its linear components, generating model explanations that are both continuous on the input features and without approximations. Finally, the thesis addresses privacy concerns by using Trusted Execution and Differential Privacy during the training process.
The techniques proposed in this thesis provide strong, theoretically provable guarantees about Neural Networks, despite their black-box nature, and enable practitioners to verify, extend, and protect the privacy of expert domain knowledge. By improving the trustworthiness of models, these techniques make Neural Networks more likely to be deployed in real-world applications.
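To give a flavour of output verification, the sketch below uses interval bound propagation — a simpler, looser cousin of the Mixed Integer Linear Programming encoding the thesis actually adapts — to bound the output of a tiny hand-written ReLU network over an input box. The weights are invented for illustration.

```python
# Tiny 1-hidden-layer ReLU net: y = w2 . relu(W1 x + b1).
# Interval arithmetic yields sound (though not tight) output bounds.
W1 = [[1.0, -1.0], [0.5, 0.5]]
b1 = [0.0, -0.25]
w2 = [1.0, 2.0]

def bounds(x_lo, x_hi):
    """Propagate an input box through the network, picking the interval
    endpoint that minimizes/maximizes each weighted term."""
    hidden = []
    for row, b in zip(W1, b1):
        lo = sum(w * (x_lo[i] if w >= 0 else x_hi[i]) for i, w in enumerate(row)) + b
        hi = sum(w * (x_hi[i] if w >= 0 else x_lo[i]) for i, w in enumerate(row)) + b
        hidden.append((max(lo, 0.0), max(hi, 0.0)))     # ReLU is monotone
    lo = sum(w * (l if w >= 0 else h) for w, (l, h) in zip(w2, hidden))
    hi = sum(w * (h if w >= 0 else l) for w, (l, h) in zip(w2, hidden))
    return lo, hi

lo, hi = bounds([0.0, 0.0], [1.0, 1.0])
assert lo >= 0.0    # verified: the output is never negative on this box
```

Unlike intervals, the MILP encoding is exact: it branches on each ReLU's active/inactive phase, which is what makes it suitable for proving properties rather than only bounding them.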
157 |
Machine Assisted Reasoning for Multi-Threaded Java Bytecode / Datorstödda resonemang om multi-trådad Java-bytekod. Lagerkvist, Mikael Zayenz, January 2005.
In this thesis an operational semantics for a subset of the Java Virtual Machine (JVM) is developed and presented. The subset contains standard operations such as control flow, computation, and memory management. In addition, the subset contains a treatment of parallel threads of execution. The operational semantics is embedded into a µ-calculus based proof assistant called the VeriCode Proof Tool (VCPT). VCPT has been developed at the Swedish Institute of Computer Science (SICS), and has powerful features for proving inductive assertions. Some examples of proving properties of programs using the embedding are presented.
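The flavour of an operational semantics for a JVM fragment can be conveyed by a toy stack-machine interpreter, where each instruction is a small-step transition on a configuration (stack, locals, pc). The miniature instruction set below is hypothetical and single-threaded; it is not the thesis's formalization.

```python
# Toy stack machine in the spirit of a JVM operational semantics:
# each instruction transforms the configuration (stack, locals, pc).
def run(program):
    stack, locals_, pc = [], {}, 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])          # push a constant
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)            # integer addition
        elif op == "store":
            locals_[args[0]] = stack.pop() # pop into a local variable
        elif op == "load":
            stack.append(locals_[args[0]]) # push a local variable
        pc += 1                            # straight-line control flow
    return locals_

state = run([("push", 2), ("push", 3), ("add",), ("store", 0)])
assert state[0] == 5
```

The thesis's semantics additionally covers memory management and interleaved threads, which is where the µ-calculus embedding and inductive proofs earn their keep.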
158 |
Mathematical Modelling of Delegation in Role Based Access Control. Subedi, Harendra, January 2017.
One of the most widespread access control models that assign permissions to users is Role Based Access Control (RBAC). The basic idea is to limit access to resources through the indirection of roles, which are associated both with users and with permissions. Research has been conducted on clarifying RBAC and its components, as well as on creating mathematical models describing different administrative aspects of RBAC. However, to date no work has formalized (mathematically modelled) the delegation and revocation of roles in RBAC, even though these provide important extensions of the policy and give flexibility in user-to-user delegation of roles, especially in environments where roles are organized in a hierarchy. Delegation allows a user with a role that is higher in the hierarchy to assign part of that role to someone who is lower in the hierarchy or at the same level, either for a limited time or permanently. The reverse process is called revocation and consists of ending different types of delegations. This thesis answers the following research question: how can the delegation and revocation of roles in RBAC be modelled mathematically? It presents different types of delegation and techniques for revocation, with a comprehensive mathematical modelling of both processes. Since the objective is to derive mathematical models for the delegation and revocation of roles in the RBAC policy, formal methods are applied. The models developed include grant and transfer delegation, with and without role hierarchy, as well as time-based, user-based and cascading revocation. A case scenario of an organization using RBAC is used to illustrate and clarify the models. 
The mathematical models presented here can serve as a starting point for developing implementations of delegation and revocation on top of existing authorization modules based on the RBAC model.
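A minimal sketch of grant delegation and cascading revocation, under simplifying assumptions (a flat role set, one role per delegation, and a single delegation path to each user); the user and role names are hypothetical, and this executable toy stands in for, rather than reproduces, the thesis's mathematical models.

```python
# Toy RBAC state: each user's role assignments, plus a record of who
# delegated which role to whom.
assigned = {"alice": {"manager"}, "bob": set(), "carol": set()}
delegations = []   # (delegator, delegatee, role)

def grant(delegator, delegatee, role):
    """Grant delegation: the delegatee gains the role, the delegator keeps it."""
    assert role in assigned[delegator]
    assigned[delegatee].add(role)
    delegations.append((delegator, delegatee, role))

def revoke_cascading(delegator, delegatee, role):
    """Cascading revocation: removing a delegation also removes every
    delegation the delegatee made using that role, recursively."""
    if (delegator, delegatee, role) in delegations:
        delegations.remove((delegator, delegatee, role))
        assigned[delegatee].discard(role)
        for (d, e, r) in list(delegations):
            if d == delegatee and r == role:
                revoke_cascading(d, e, r)

grant("alice", "bob", "manager")
grant("bob", "carol", "manager")
revoke_cascading("alice", "bob", "manager")
assert assigned["bob"] == set() and assigned["carol"] == set()
```

Transfer delegation (where the delegator loses the role), role hierarchies, and time-based revocation would each add structure to this state, which is exactly what the formal models in the thesis capture.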
159 |
Assumption-Based Runtime Verification of Finite- and Infinite-State Systems. Tian, Chun, 23 November 2022.
Runtime Verification (RV) is usually considered a lightweight automatic verification technique for the dynamic analysis of systems, in which a monitor observes executions produced by a system and analyzes them against a formal specification. If the monitor is synthesized not only from the monitoring specification but also from extra assumptions on the system behavior (typically described by a model such as a transition system), then it may output more precise verdicts or even be predictive; on the other hand, it may no longer be lightweight, since monitoring under assumptions has the same computational complexity as model checking. When suitable assumptions come into play, the monitor may also support partial observability, where non-observable variables in the specification are inferred from observable ones, either present or historical. Furthermore, the monitors are resettable, i.e. able to evaluate the specification at non-initial times of the execution while keeping memory of the input history. This helps in breaking the monotonicity of monitors, which, after reaching a conclusive verdict, can still change their future outputs by resetting their reference time. The combination of these three characteristics (assumptions, partial observability and resets) in monitor synthesis is called Assumption-Based Runtime Verification, or ABRV. In this thesis, we give the formalism of the ABRV approach and a group of monitoring algorithms based on specifications expressed in Linear Temporal Logic with both future and past operators, involving Boolean and possibly other types of variables. When all involved variables have finite domains, the monitors can be synthesized as finite-state machines implemented with Binary Decision Diagrams. With infinite-domain variables, the infinite-state monitors are based on satisfiability modulo theories, first-order quantifier elimination and various model checking techniques. 
In particular, Bounded Model Checking is modified to work incrementally, efficiently obtaining inconclusive verdicts before IC3-based model checkers get involved. All the monitoring algorithms in this thesis are implemented in a tool called NuRV. NuRV supports online and offline monitoring, and can also generate standalone monitor code in various programming languages. In particular, monitors can be synthesized as SMV models, whose behavioral correctness and other properties can be further verified by model checking.
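An online monitor for a past-time property can be sketched in a few lines. The property G(alarm -> O fault) — "every alarm must be preceded by, or coincide with, a fault" — and the two-verdict output below are illustrative assumptions, far simpler than the BDD- and SMT-based monitors with assumptions, partial observability and resets that NuRV synthesizes.

```python
# Online monitor for the past-time LTL property G(alarm -> O fault).
# Each call to `step` consumes one observation (a set of atomic
# propositions) and returns the current verdict.
def make_monitor():
    seen_fault = False
    def step(obs):
        nonlocal seen_fault
        seen_fault = seen_fault or "fault" in obs
        if "alarm" in obs and not seen_fault:
            return "violation"        # conclusive, irrevocable verdict
        return "ok so far"
    return step

m = make_monitor()
assert m({"tick"}) == "ok so far"
assert m({"fault"}) == "ok so far"
assert m({"alarm"}) == "ok so far"    # a fault was observed earlier

m2 = make_monitor()
assert m2({"alarm"}) == "violation"   # alarm with no prior fault
```

Adding resets would amount to re-initializing the monitor's state at a chosen reference time, and adding assumptions would let it refine "ok so far" into predictive verdicts.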
160 |
Trusted Unmanned Aerial System Operations. Theyyar Maalolan, Lakshman, 03 June 2020.
Proving the correctness of autonomous systems is challenged by the use of non-deterministic artificial intelligence algorithms and ever-increasing amounts of code. While correctness is conventionally determined through analysis and testing, it is impossible to train and test a system for all possible scenarios, or to formally analyze millions of lines of code. This thesis describes an alternative method that monitors system behavior during runtime and executes a recovery action if any formally specified property is violated. Multiple parallel safety monitors, synthesized from linear temporal logic (LTL) formulas capturing the correctness and liveness properties, are implemented in isolated configurable hardware to avoid negative impacts on system performance. Model checking applied to the final implementation establishes the correctness of this last line of defense against malicious attacks and software bugs. The first part of this thesis illustrates the monitor synthesis flow with rules defining a three-dimensional cage for a commercial off-the-shelf drone, and demonstrates the effectiveness of the monitoring system in enforcing strict behaviors. The second part of this work defines safety monitors to provide assurances for a virtual autonomous flight beyond visual line of sight. Distinct sets of monitors are called into action during different flight phases to monitor flight-plan conformance, stability, and airborne collision avoidance. A wireless interface supported by the proposed architecture enables the configuration of monitors, thereby eliminating the need to reprogram the FPGA for every flight. Overall, the goal is to increase trust in autonomous systems, as demonstrated with two common drone operations. / Master of Science / Software code in autonomous systems, like cars, drones, and robots, keeps growing not just in length, but also in complexity. 
The use of machine learning and artificial intelligence algorithms to make decisions could result in unexpected behaviors when encountering completely new situations. Traditional methods of verifying software encounter difficulties while establishing the absolute correctness of autonomous systems. An alternative to proving correctness is to enforce correct behaviors during execution. The system's inputs and outputs are monitored to ensure adherence to formally stated rules. These monitors, automatically generated from rules specified as mathematical formulas, are isolated from the rest of the system and do not affect the system performance. The first part of this work demonstrates the feasibility of the approach by adding monitors to impose a virtual cage on a commercially available drone. The second phase of this work extends the idea to a simulated autonomous flight with a predefined set of points that the drone must pass through. These points along with the necessary parameters for the monitors can be uploaded over Bluetooth. The position, speed, and distance to nearby obstacles are independently monitored and a recovery action is executed if any rule is violated. Since the monitors do not assume anything about the source of the violations, they are effective against malicious attacks, software bugs, and sensor failures. Overall, the goal is to increase confidence in autonomous systems operations.
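The "virtual cage" rule can be sketched in software as follows; the thesis implements such monitors in isolated configurable hardware synthesized from LTL formulas, and the cage bounds and verdict names here are hypothetical.

```python
# Software sketch of a virtual-cage monitor: each control cycle, the
# drone's reported position is checked against an axis-aligned 3-D box,
# and a recovery action is signalled on the first violation.
CAGE = {"x": (-10.0, 10.0), "y": (-10.0, 10.0), "z": (0.0, 30.0)}

def monitor(position):
    inside = all(lo <= position[axis] <= hi for axis, (lo, hi) in CAGE.items())
    return "nominal" if inside else "trigger-recovery"

assert monitor({"x": 0.0, "y": 5.0, "z": 10.0}) == "nominal"
assert monitor({"x": 0.0, "y": 5.0, "z": 40.0}) == "trigger-recovery"
```

In the hardware realization, the cage bounds would be among the parameters uploaded over the wireless interface, so the FPGA need not be reprogrammed between flights.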