181 |
On Reducing the Trusted Computing Base in Binary Verification
An, Xiaoxin, 15 June 2022
The translation of binary code to higher-level models has wide applications, including decompilation, binary analysis, and binary rewriting. This calls for high reliability of the underlying trusted computing base (TCB) of the translation methodology. A key challenge is to reduce the TCB by validating its soundness. Both the definition of soundness and the validation method heavily depend on the context: what is in the TCB and how to prove it. This dissertation presents three research contributions. The first two contributions include reducing the TCB in binary verification, and the last contribution includes a binary verification process that leverages a reduced TCB.
The first contribution targets the validation of OCaml-to-PVS translation -- commonly used to translate instruction-set-architecture (ISA) specifications to PVS -- where the destination language is non-executable. We present a methodology called OPEV to validate the translation between OCaml and PVS, supporting non-executable semantics. The validation includes generating large-scale tests for OCaml implementations, generating test lemmas for PVS, and generating proofs that automatically discharge these lemmas. OPEV incorporates an intermediate type system that captures a large subset of OCaml types, employing a variety of rules to generate test cases for each type. To prove the PVS lemmas, we develop automatic proof strategies and discharge the test lemmas using PVS Proof-Lite, a powerful proof scripting utility of the PVS verification system. We demonstrate our approach in two case studies that include 259 functions selected from the Sail and Lem libraries. For each function, we generate thousands of test lemmas, all of which are automatically discharged.
The dissertation's second contribution targets the soundness validation of a disassembly process where the source language does not have well-defined semantics.
Disassembly is a crucial step in binary security, reverse engineering, and binary verification. Various studies in these fields use disassembly tools and hypothesize that the reconstructed disassembly is correct. However, disassembly is an undecidable problem. State-of-the-art disassemblers suffer from issues ranging from incorrectly recovered instructions to incorrectly assessing which addresses belong to instructions and which to data. We present DSV, a systematic and automated approach to validate whether the output of a disassembler is sound with respect to the input binary. No source code, debugging information, or annotations are required. DSV defines soundness using a transition relation defined over concrete machine states: a binary is sound if, for all addresses in the binary that can be reached from the binary's entry point, the bytes of the (disassembled) instruction located at an address are the same as the actual bytes read from the binary. Since computing this transition relation is undecidable, DSV uses over-approximation by preventing false positives (i.e., an incorrectly disassembled instruction that is reachable yet deemed unreachable) and allowing, but minimizing, false negatives. We apply DSV to 102 binaries of GNU Coreutils with eight different state-of-the-art disassemblers from academia and industry. DSV is able to find soundness issues in the output of all disassemblers.
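A minimal Python sketch of the soundness check described above may help fix the idea. The `disassembly`, `reachable_addresses`, and `read_bytes` inputs are hypothetical stand-ins for the disassembler output, the over-approximated reachable set, and the raw binary; they are illustrative assumptions, not DSV's actual interface.

```python
def check_disassembly_soundness(disassembly, reachable_addresses, read_bytes):
    """Check that every reachable disassembled instruction matches the raw bytes.

    disassembly:          dict mapping address -> bytes the disassembler claims
                          encode the instruction at that address
    reachable_addresses:  over-approximated set of addresses reachable from the
                          binary's entry point
    read_bytes:           function (address, length) -> bytes actually stored in
                          the binary at that address
    """
    issues = []
    for addr in sorted(reachable_addresses):
        if addr not in disassembly:
            # A reachable address the disassembler missed entirely.
            issues.append((addr, "no instruction recovered"))
            continue
        claimed = disassembly[addr]
        actual = read_bytes(addr, len(claimed))
        if claimed != actual:
            # Recovered instruction bytes disagree with the binary itself.
            issues.append((addr, "byte mismatch"))
    return issues  # an empty list means the disassembly is sound for this set


# Toy usage: a two-"instruction" binary where the second instruction is wrong.
binary = {0x0: b"\x55", 0x1: b"\x48\x89\xe5"}
disasm = {0x0: b"\x55", 0x1: b"\x48\x89\xec"}   # last byte differs
reachable = {0x0, 0x1}
print(check_disassembly_soundness(
    disasm, reachable, lambda a, n: binary[a][:n]))   # [(1, 'byte mismatch')]
```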
The dissertation's third contribution is WinCheck: a concolic model checker that checks memory-related properties of closed-source binaries. Bugs related to memory accesses remain a major source of security vulnerabilities. Even a single buffer overflow or use-after-free in a large program may be the cause of a software crash, a data leak, or a hijacking of the control flow. Typical static formal verification tools aim to detect these issues at the source code level. WinCheck is a model checker that is directly applicable to closed-source and stripped Windows executables. A key characteristic of WinCheck is that it performs its execution as symbolically as possible while leaving any information related to pointers concrete.
This produces a model checker tailored to pointer-related properties, such as buffer overflows, use-after-free, null-pointer dereferences, and reading from uninitialized memory. The technique thus provides a novel trade-off between ease of use, accuracy, applicability, and scalability. We apply WinCheck to ten closed-source binaries available in a Windows 10 distribution, as well as the Windows version of the entire Coreutils library. We conclude that the approach taken is precise -- provides only a few false negatives -- but may not explore the entire state space due to unresolved indirect jumps. / Doctor of Philosophy / Binary verification is a process that verifies a class of properties, usually security-related properties, on binary files, and does not need access to source code.
Since a binary file is composed of byte sequences and is not human-readable, in the binary verification process, a number of assumptions are usually made. The assumptions often involve the error-free nature of a set of subsystems used in the verification process and constitute the verification process's trusted computing base (or TCB). The reliability of the verification process therefore depends on how reliable the TCB is. The dissertation presents three research contributions in this regard. The first two contributions include reducing the TCB in binary verification, and the last contribution includes a binary verification process that leverages a reduced TCB.
The dissertation's first contribution presents a validation of OCaml-to-PVS translations -- commonly used to translate a computer architecture's instruction specifications to PVS, a language that allows mathematical specifications. To build a reliable semantic model of assembly instructions, which is assumed to be in the TCB, it is necessary to validate the translation.
The dissertation's second contribution validates the soundness of the disassembly process, which translates a binary file to corresponding assembly instructions.
Since the disassembly process is generally assumed to be trustworthy in many binary verification works, the TCB of binary verification could be reduced by validating the soundness of the disassembly process.
With the reduced TCB, the dissertation introduces WinCheck, the dissertation's third and final contribution: a concolic model checker that validates pointer-related properties of closed-source Windows binaries. The pointer-related properties include absence of buffer overflow, absence of use-after-free, and absence of null-pointer dereference.
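As a rough illustration of the concolic style described above (symbolic where possible, concrete for anything pointer-related), the following Python sketch keeps register and memory contents either as concrete integers or as opaque symbolic tags. The names and the exact split between symbolic and concrete are illustrative assumptions, not WinCheck's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sym:
    """An opaque symbolic value (e.g., an unknown input byte)."""
    name: str

@dataclass
class State:
    regs: dict = field(default_factory=dict)   # register -> int or Sym
    mem: dict = field(default_factory=dict)    # concrete address -> int or Sym

    def write(self, addr, value):
        # Addresses must stay concrete so pointer-related properties
        # (buffer overflow, use-after-free, ...) can be checked exactly.
        assert isinstance(addr, int), "pointer must be concrete"
        self.mem[addr] = value

    def read(self, addr):
        assert isinstance(addr, int), "pointer must be concrete"
        if addr not in self.mem:
            raise RuntimeError(f"read from uninitialized memory at {addr:#x}")
        return self.mem[addr]

# Data stays symbolic, pointers stay concrete:
s = State(regs={"rax": Sym("input0"), "rsp": 0x7fff_0000})
s.write(s.regs["rsp"], s.regs["rax"])   # fine: concrete address, symbolic data
print(s.read(0x7fff_0000))              # Sym(name='input0')
```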
|
182 |
UNVEILING THE SHADOWS: A COGNITIVE APPROACH TO UNDERSTANDING SOCIAL INFLUENCE STRATEGIES FOR ESTABLISHING SOCIAL ORDER IN DARKNET MARKETS
Andrei, Filippo, 15 March 2024
Darknet markets have emerged due to technological advancements, decreasing the likelihood of violence by facilitating remote purchasing interactions. However, the absence of traditional legal frameworks makes maintaining order in these illegal online markets challenging. Without a legitimate state to enforce property rights or quality standards, sustaining order becomes increasingly complex. Despite its illicit nature and the absence of a legitimate state to protect market transactions, the darknet market has proven to be a resilient environment where user satisfaction rivals that of traditional e-commerce platforms such as eBay. How is this possible? How can social order emerge in such a context? Existing studies have primarily approached the issue from neo-institutionalist and social network perspectives, examining the emergence of social order through informal institutions and repeated interactions. A notable gap remains in understanding the cognitive aspects shaping decision-making processes in these illicit markets. This dissertation aims to fill this gap by examining, through a socio-cognitive lens, the role of social influence in establishing the social order of the market in the absence of legal safeguards.
|
183 |
Enhancing SAT-based Formal Verification Methods using Global Learning
Arora, Rajat, 25 May 2004
With the advances in VLSI and System-On-Chip (SOC) technology, the complexity of hardware systems has increased manifold. Today, 70% of the design cost is spent in verifying these intricate systems. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. Equivalence Checking requires that the implementation circuit be exactly equivalent to the specification circuit (golden model). In other words, for each possible input pattern, the implementation circuit should yield the same outputs as the specification circuit. Model checking, on the other hand, checks whether the design satisfies certain properties, which in turn are indispensable for the proper functionality of the design. The complexity of both Equivalence Checking and Model Checking is exponential in the circuit size.
In this thesis, we firstly propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) and Bounded Model Checking (BMC). The idea is to perform a low-cost preprocessing that will statically induce global signal relationships into the original CNF formula of the circuit under verification and hence reduce the complexity of the SAT instance. This efficient and effective preprocessing quickly builds up the implication graph for the circuit under verification, yielding a large set of logic implications composed of direct, indirect and extended backward implications. These two-node implications (spanning time-frame boundaries) are converted into two-literal clauses, and added to the original CNF database. The added clauses constrain the search space of the SAT-solver engine, and provide correlation among the different variables, which enhances the Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan) and ITC'99 (full scan) CEC instances and ISCAS'89 BMC instances show that our approach is independent of the state-of-the-art SAT-solver used, and that the added clauses help to achieve more than an order of magnitude speedup over the conventional approach. Also, comparison with Hyper-Resolution [Bacchus 03] suggests that our technique is much more powerful, yielding non-trivial clauses that significantly simplify the SAT instance complexity.
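To make the preprocessing idea concrete, here is a small hedged sketch in Python: it takes learned two-node implications of the form (a implies b), possibly spanning time frames, and appends the corresponding two-literal clauses (not a, or b) to the CNF database. The DIMACS-style integer encoding and the helper names are assumptions for illustration, not the thesis's actual implementation.

```python
def implication_to_clause(antecedent, consequent):
    """Encode (antecedent -> consequent) as a two-literal clause.

    Literals use the DIMACS convention: a positive integer v denotes
    variable v and -v its negation; (a -> b) is the clause (-a OR b).
    """
    return tuple(sorted((-antecedent, consequent)))


def augment_cnf(clauses, implications):
    """Append one two-literal clause per learned implication, skipping duplicates."""
    seen = {tuple(sorted(c)) for c in clauses}
    for a, b in implications:
        clause = implication_to_clause(a, b)
        if clause not in seen:
            clauses.append(list(clause))
            seen.add(clause)
    return clauses


# Toy example: variable 3 (time frame 0) implies variable 7 (time frame 1);
# unrolled time frames are assumed to use disjoint variable numbers.
cnf = [[1, -2, 3], [2, 7]]
learned = [(3, 7), (-7, -3)]   # an implication and its contrapositive: same clause
print(augment_cnf(cnf, learned))   # [[1, -2, 3], [2, 7], [-3, 7]]
```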
Secondly, we propose a novel global learning technique that helps to identify highly non-trivial relationships among signals in the circuit netlist, thereby boosting the power of the existing implication engine. We call this new class of implications 'extended forward implications', and show their effectiveness through the additional untestable faults they help to identify.
Thirdly, we propose a suite of lemmas and theorems to formalize global learning. We show through implementation that these theorems help to significantly simplify a generic CNF formula (from Formal Verification, Artificial Intelligence etc.) by identifying the necessary assignments, equivalent signals, complementary signals and other non-trivial implication relationships among its variables. We further illustrate through experimental results that the CNF formula simplification obtained using our tool outshines the simplification obtained using other preprocessors. / Master of Science
|
184 |
Reachability Analysis of RTL Circuits Using k-Induction Bounded Model Checking and Test Vector Compaction
Roy, Tonmoy, 05 September 2017
In the first half of this thesis, a novel approach for k-induction bounded model checking using signal domain constraints and property partitioning for proving unreachability of branches in Verilog RTL code is presented. To do this, the approach uses program slicing with respect to the variables of the property under test to generate small SMT formulas that describe the change of variable values between consecutive cycles. Variable substitution is then used on these variables to generate the formulas for subsequent cycles without traversing the abstract syntax tree of the entire design. To reduce the over-approximation in the induction step, the addition of signal domain constraints is proposed. Moreover, we present a technique for splitting up the property in question to obtain a better model of the system. The latter half of the thesis presents a technique for sequential vector compaction of test sets generated during simulation-based ATPG. Starting with a compaction framework for storing metadata about the test vectors during generation, this work presents two methods for solving the compaction problem. The first method generates the optimal solution by converting the problem into a form suitable for an optimization solver. The second method uses a heuristic approach that generates a comparable but sub-optimal solution with orders-of-magnitude better time and computational efficiency. / Master of Science / Electronic circuits can be described with languages known as hardware description languages, such as Verilog. The first part of this thesis is concerned with automatically proving whether parts of this code are actually useful or reachable when implemented on an actual circuit. The thesis builds on a method known as bounded model checking, which can automatically prove whether a property holds for a given system. The key insight is obtained from the fact that various memory elements in a circuit are allowed to take only a certain range of values during the design process. The latter half of this thesis is geared towards generating the minimum-sized input values to a circuit required for testing it. This work takes the large input sets generated by a previously published tool and proposes a way to make them smaller. This can reduce cost immensely for testing circuits in industry, where even the smallest increase in testing time increases the cost of development. There are two such approaches presented: one gives the optimal result but takes a long time to run for larger circuits, while the other gives a comparable but sub-optimal result in a much more time-efficient manner.
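The k-induction scheme the first half relies on can be summarized with a short hedged sketch using the Z3 SMT solver. The tiny counter system, the property, and the signal-domain constraint below are invented for illustration; the thesis works on SMT formulas sliced out of Verilog RTL, not on hand-written transition functions.

```python
from z3 import Int, Solver, And, Or, Not, sat

def var(i):                       # state variable of time frame i
    return Int(f"x_{i}")

def init(x):                      # initial state: counter starts at 0
    return x == 0

def trans(x, xn):                 # transition: wrap-around increment
    return Or(And(x < 7, xn == x + 1), And(x == 7, xn == 0))

def prop(x):                      # property to prove: counter never exceeds 7
    return x <= 7

def domain(x):                    # signal-domain constraint (legal value range)
    return And(x >= 0, x <= 7)

def k_induction(k):
    # Base case: no property violation within k transitions from an initial state.
    base = Solver()
    base.add(init(var(0)))
    base.add(*(trans(var(i), var(i + 1)) for i in range(k)))
    base.add(Or(*(Not(prop(var(i))) for i in range(k + 1))))
    if base.check() == sat:
        return "property violated (counterexample found)"
    # Induction step: k+1 consecutive good states force a good successor.
    # Domain constraints tighten the over-approximation of reachable states.
    step = Solver()
    step.add(*(domain(var(i)) for i in range(k + 2)))
    step.add(*(And(prop(var(i)), trans(var(i), var(i + 1))) for i in range(k + 1)))
    step.add(Not(prop(var(k + 1))))
    if step.check() == sat:
        return "inconclusive at this k (try a larger k or more constraints)"
    return "property proved"

print(k_induction(k=2))   # expected: "property proved"
```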
|
185 |
Décomposition d’image par modèles variationnels : débruitage et extraction de texture / Variational models for image decomposition: denoising and texture extraction
Piffet, Loïc, 23 November 2010
This thesis is devoted, in a first part, to the development of a second-order variational model for image denoising, involving the space BV2 of functions with bounded Hessian. We draw directly on the well-known Rudin, Osher and Fatemi (ROF) model, replacing the minimization of the total variation of the function with the minimization of the second-order total variation, that is, the total variation of its derivatives. The goal is to obtain a model that performs as well as the ROF model while also resolving the staircasing effect that the latter produces. The model studied here appears effective, but it introduces a slight blur. To reduce this effect, we finally introduce a mixed model that yields solutions which are neither piecewise constant nor blurred in the details. In a second part, we address the texture extraction problem. A model recognized as one of the most effective is the TV-L1 model, which simply replaces the L2 norm of the data-fitting term in the ROF model with the L1 norm. We propose an original method for solving this problem using augmented Lagrangian methods. For the same reasons as in the denoising case, we also introduce the TV2-L1 model, which again replaces the total variation with the second-order total variation. A mixed texture extraction model is finally introduced very briefly. The manuscript concludes with an extensive chapter devoted to numerical experiments.
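For orientation, the models discussed can be summarized schematically as follows, with λ > 0 a fitting parameter, f the observed image, and Ω the image domain; the exact function spaces and weights are as defined in the thesis, so this display is only an informal reminder of the standard forms.

```latex
% ROF denoising: first-order total variation with an L^2 fitting term
\min_{u}\; |Du|(\Omega) + \frac{\lambda}{2}\,\|u-f\|_{L^2(\Omega)}^2
% Second-order variant studied in the thesis: total variation of the derivatives
\min_{u}\; |D^2 u|(\Omega) + \frac{\lambda}{2}\,\|u-f\|_{L^2(\Omega)}^2
% TV-L^1 texture extraction: replace the L^2 fitting term by an L^1 term
\min_{u}\; |Du|(\Omega) + \lambda\,\|u-f\|_{L^1(\Omega)}
```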
|
186 |
Synthèse d'observateurs ensemblistes pour l’estimation d’état basées sur la caractérisation explicite des bornes d’erreur d’estimation / Set-membership state observer design based on explicit characterization of the estimation-error bounds
Loukkas, Nassim, 06 June 2018
In this work, we propose two new set-membership approaches for state estimation based on the explicit characterization of the estimation-error bounds. These approaches can be seen as a combination of a punctual (point-valued) observer with a set-membership characterization of the estimation error. The objective is to reduce the complexity of the online implementation, reduce the online computation time, and improve the accuracy of the estimated state enclosure. The first approach is a set-membership observer based on ellipsoidal invariant sets for linear discrete-time systems and for linear parameter-varying systems. The proposed approach provides a deterministic state interval built as the sum of the estimated state vector and the bounds of the estimation error. An important feature of this approach is that it does not require the propagation of state sets over time. The second approach is an interval version of the Luenberger state observer for uncertain discrete-time linear systems, based on interval computation and invariant sets. Here, the set-membership estimation problem is treated as a punctual state estimation problem coupled with an interval characterization of the estimation error.
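As a schematic reminder of the construction described above (a point observer plus explicit error bounds), consider the standard discrete-time Luenberger observer; the notation below is generic and the bound ē stands for the componentwise estimation-error bound characterized through the invariant sets, so this is an informal sketch rather than the thesis's exact formulation.

```latex
% Plant and point (Luenberger) observer, discrete time
x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k
\hat{x}_{k+1} = A \hat{x}_k + B u_k + L\,(y_k - C \hat{x}_k)
% Estimation error and its (componentwise) bound obtained from an invariant set
e_k = x_k - \hat{x}_k, \qquad |e_k| \le \bar{e}
% Deterministic state enclosure: estimated state plus error bounds
x_k \in [\,\hat{x}_k - \bar{e},\; \hat{x}_k + \bar{e}\,]
```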
|
187 |
Analysera eller gå på magkänsla? : Hur svenska chefer använder analys och intuition i sina beslut under Coronakrisen / Analyse or follow your gut? : How Swedish managers use analysis and intuition in their decision making during the Covid-19 crisis
Ahmadi Jah, Robert Roham; Chatten, Daniel; Sabah Ali, Hesen, January 2021
A crisis such as the Covid-19 pandemic is an extreme situation that differs from day-to-day conditions and requires that the right decisions be made. Such extreme situations put pressure on managers in organizations to make decisions that are often improvised, partly because of time pressure and stress, and partly because each crisis is unique, which makes it less clear what the right decision is. The decisions managers make during a crisis often differ from how those decisions would have been made in a normal situation. Should the manager analyse the situation before deciding, because the crisis is so complex, or should the manager instead rely on gut feeling, because the crisis is too complex to analyse? This question has received much attention in decision-making research, not least for extreme situations and crises such as a pandemic.
The purpose of this study is to increase the understanding of how managers handle the improvised decision making that arises during a crisis. The study contrasts analytical decisions with intuitive decisions, while leaving open the possibility that the two styles can be combined. Interviews were conducted with managers from different industries across Sweden to better understand crisis decision making during the Covid-19 pandemic. The study shows that most managers use analysis or combine analysis with intuition; only a few managers tend to rely on intuition alone. Furthermore, how the manager views the crisis affects the decisions he or she makes. If the manager views the pandemic merely as a threat, he or she tends to focus on internal activities aimed at mitigating the pandemic's negative effects on the organization and supporting employees. If the manager also views the pandemic as an opportunity, this opens up external activities that can take advantage of it, such as expanding the business and broadening contact networks for new business opportunities. Most decisions were based on close interplay and communication with other actors; decisions were rarely made without any communication at all. This communication appears to have counteracted the negative effects that various biases introduce into decisions: for example, managers are less partial when other people's perspectives are taken into account before a decision is made. Finally, most managers believe that the pandemic has made them better decision makers, and some believe that earlier stressful situations and crises were of great help during the Covid-19 pandemic as well.
|
188 |
Uncommon knowledge
Lederman, Harvey, January 2014
This dissertation collects four papers on common knowledge and one on introspection principles in epistemic game theory. The first two papers offer a sustained argument against the importance of common knowledge and belief in explaining social behavior. Chapters 3 and 4 study the role of common knowledge of tautologies in standard models in epistemic logic and game theory. The first considers the problem as it relates to Robert Aumann’s Agreement Theorem; the second (joint work with Peter Fritz) studies it in models of awareness. The fifth paper corrects a claimed Agreement Theorem of Geanakoplos (1989), and exploits the corrected theorem to provide epistemic conditions for correlated equilibrium and Nash equilibrium.
|
189 |
L'évaluation de requêtes avec un délai constant / Query evaluation with constant delay
Kazana, Wojciech, 16 September 2013
This thesis focuses on the problem of query evaluation. Given a query q and a database D, the goal is to compute the set q(D) of tuples resulting from the evaluation of q over D. However, q(D) may be larger than the database itself, since its size can be of the form n^l, where n is the size of the database and l is the arity of the query. Fully computing q(D) may therefore require more resources than are available. The main focus of this thesis is a particular solution to this problem: enumerating q(D) with constant delay. Intuitively, this means that there is an algorithm with two phases: a preprocessing phase that runs in time linear in the size of the database, followed by an enumeration phase that outputs the elements of q(D) one by one, with a constant delay (independent of the size of the database) between two consecutive elements. In addition, four related problems are considered: model checking (where the query q is Boolean), counting (where we want to compute the size |q(D)|), testing (where we want an efficient test for whether a given tuple belongs to the result of the query), and the j-th solution (where we want direct access to the j-th element of q(D)). The results presented in this thesis address these problems for: first-order queries on classes of structures of bounded degree; monadic second-order queries on classes of structures of bounded treewidth; and first-order queries on classes of structures with bounded expansion.
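The two-phase shape of a constant-delay enumeration algorithm can be illustrated with a deliberately simple Python sketch for one easy query (all pairs (x, y) joined through a bounded-degree neighbor relation); the data layout and the query are invented for illustration, and real constant-delay algorithms for the query classes above are far more involved.

```python
# Linear-time preprocessing followed by enumeration with O(1) work
# between consecutive answers.

def preprocess(edges):
    """Linear-time pass: group neighbors by source and keep only useful sources."""
    adj = {}
    for x, y in edges:
        adj.setdefault(x, []).append(y)
    # Sources with at least one neighbor, so the enumeration never has to
    # skip over 'empty' sources (which would break the constant delay).
    order = [x for x in adj if adj[x]]
    return adj, order

def enumerate_answers(adj, order):
    """Yield answers one by one; the work between two yields is constant."""
    for x in order:
        for y in adj[x]:
            yield (x, y)

edges = [(1, 2), (1, 3), (2, 3), (4, 1)]
adj, order = preprocess(edges)
for answer in enumerate_answers(adj, order):
    print(answer)   # (1, 2), (1, 3), (2, 3), (4, 1)
```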
|
190 |
Le théorème de Lebesgue sur la dérivabilité des fonctions à variation bornée / Lebesgue's theorem on the differentiability of functions of bounded variation
Mombo Mingandza, Patrick Landry, 01 1900
In this thesis, we treat a theorem of Lebesgue, one of the most striking and most important in mathematical analysis: namely, that a function of bounded variation is differentiable almost everywhere. The goal of this work is to provide, apart from the proof usually given in measure theory courses, other proofs developed with simpler mathematical tools. My contribution consisted essentially in detailing and completing these proofs, and in including most of the figures for better readability. For this theorem, which also appears in other variants, we present its history and three different proofs.
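For reference, the theorem in question can be stated as follows; this is a standard formulation, and any variant-specific hypotheses are as given in the thesis itself.

```latex
% Lebesgue's differentiation theorem for functions of bounded variation
\textbf{Theorem (Lebesgue).}\quad
\text{If } f:[a,b]\to\mathbb{R} \text{ has bounded variation, i.e. }
V_a^b(f)=\sup_{a=x_0<\dots<x_n=b}\ \sum_{i=1}^{n}|f(x_i)-f(x_{i-1})|<\infty,
\text{ then } f'(x) \text{ exists for almost every } x\in[a,b].
```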
|