11

Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers

Taegyu Kim (10716420) 06 May 2021 (has links)
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAVs' security and safety issues can arise not only within the "cyber" domain (e.g., control software) or the "physical" domain (e.g., vehicle control model) individually, but also from their interplay. Unfortunately, existing work has focused mainly on either "cyber-centric" or "control-centric" approaches; such a single-domain focus can overlook security threats caused by the interplay between the cyber and physical domains.

In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we developed novel techniques to (1) connect the cyber and physical domains and (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 cases and patched 11 cases.
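The control-parameter fuzzing idea behind a tool like RVFuzzer can be pictured in a few lines. This is a minimal sketch assuming a toy closed-loop simulation in place of the real software-in-the-loop setup; the dynamics, parameter range, and instability threshold are invented for the example, not taken from the dissertation:

```python
import random

def simulated_flight(gain):
    """Toy stand-in for a software-in-the-loop run: returns the worst-case
    attitude error left after flying with the given controller gain."""
    error, velocity = 10.0, 0.0
    for _ in range(200):
        velocity += -gain * error * 0.01   # toy proportional controller
        error += velocity
        if abs(error) > 1e6:               # diverged: vehicle unstable
            return float("inf")
    return abs(error)

def fuzz_parameter(lo, hi, trials=1000, threshold=1.0):
    """Randomly mutate one control parameter inside its accepted range and
    collect values the control code accepts but cannot stabilize."""
    return [g for g in (random.uniform(lo, hi) for _ in range(trials))
            if simulated_flight(g) > threshold]

print(len(fuzz_parameter(-2.0, 2.0)), "suspect parameter values found")
```

Parameter values that pass the control program's input validation but destabilize the simulated vehicle are exactly the cyber-physical bugs a single-domain analysis would miss.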
12

Revamping Binary Analysis with Sampling and Probabilistic Inference

Zhuo Zhang (16398420) 19 June 2023 (has links)
Binary analysis, a cornerstone technique in cybersecurity, enables the examination of binary executables irrespective of source code availability. It plays a critical role in understanding program behaviors, detecting software bugs, and mitigating potential vulnerabilities, especially in situations where the source code remains out of reach. However, aligning the efficacy of binary analysis with that of source-level analysis remains a significant challenge, primarily due to the uncertainty caused by the loss of semantic information during compilation.

This dissertation presents an innovative probabilistic approach, termed probabilistic binary analysis, designed to combat the intrinsic uncertainty in binary analysis. It builds on the fundamental principles of program sampling and probabilistic inference, further enhanced by an iterative refinement architecture. The dissertation argues that a thorough and practical method of sampling program behaviors can yield a substantial quantity of hints that are instrumental in recovering lost information, despite the potential inclusion of some inaccuracies. A probabilistic inference technique is then applied to systematically incorporate and process the collected hints, suppressing the incorrect ones and thereby enabling the interpretation of high-level semantics. Furthermore, an iterative refinement mechanism augments the efficiency of the probabilistic analysis in subsequent applications, facilitating progressive enhancement of analysis outcomes through an automated or human-guided feedback loop.

This work offers an in-depth understanding of the challenges and solutions related to assessing low-level program representations and systematically handling the inherent uncertainty in binary analysis. It aims to contribute to the field by advancing the development of precise, reliable, and interpretable binary analysis solutions, thereby setting the groundwork for future exploration in this domain.
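The hint-fusion step admits a compact sketch. Below is a naive-Bayes-style aggregation of noisy hints about a single unknown (say, a variable's type recovered from a stripped binary); the uniform error model and the accuracy figure are assumptions made for illustration, not the dissertation's actual inference engine:

```python
import math

def infer_label(hints, prior, accuracy=0.8):
    """Fuse independent noisy hints about one unknown with a naive-Bayes
    update. A wrong hint is assumed to pick uniformly at random among the
    remaining labels."""
    labels = list(prior)
    log_post = {l: math.log(prior[l]) for l in labels}
    for h in hints:
        for l in labels:
            p = accuracy if h == l else (1 - accuracy) / (len(labels) - 1)
            log_post[l] += math.log(p)
    top = max(log_post.values())                    # normalize stably
    w = {l: math.exp(v - top) for l, v in log_post.items()}
    z = sum(w.values())
    return {l: v / z for l, v in w.items()}

# Three sampled executions hint "pointer"; one noisy sample says "int".
print(infer_label(["ptr", "ptr", "int", "ptr"], {"ptr": 0.5, "int": 0.5}))
```

The posterior concentrates on "ptr" (about 0.94 here): consistent hints reinforce each other while the stray incorrect one is suppressed, which is the essence of tolerating inaccurate samples.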
13

Low-Level Static Analysis for Memory Usage and Control Flow Recovery

Bockenek, Joshua Alexander 07 March 2023 (has links)
Formal characterization of the memory used by a program is an important basis for security analyses, compositional verification, and identification of noninterference. However, soundly proving memory usage requires operating on the assembly level due to the semantic gap between high-level languages and the code that processors actually execute. Automated methods, such as model checking, would not be able to handle many interesting functions due to the undecidability of memory usage. Fully-interactive methods do not scale well either. Sound control flow recovery (CFR) is also important for binary decompilation, verification, patching, and security analysis. It lifts raw unstructured data into a form that allows reasoning over behavior and semantics. However, doing so requires interpreting the behavior of the program when indirect or dynamic control flow exists, creating a recursive dependency. This dissertation tackles the first property with two contributions that perform proof generation combined with interactive theorem proving in a semi-automated manner: an untrusted tool extracts as much information as it can from the functions under test and then generates all the necessary proofs to be completed in a theorem prover. The first, Floyd-style approach still requires significant manual effort but provides good flexibility and ensures no paths are analyzed more than once. In contrast, the second, Hoare-style approach sacrifices some flexibility and avoidance of repeated path evaluation in order to achieve much greater automation. However, neither approach can handle the dynamic control flow caused by indirect branching. The second property is handled by the second set of contributions of this dissertation. These two contributions provide fully-automated methods of recovering control flow from binaries even in the presence of indirect branching. When such dynamic control flow cannot be overapproximatively resolved, it is clearly noted in the resultant output. In the first approach to control flow recovery, a structured memory representation allows for general analysis of control flow in the presence of indirection, gaining scalability by utilizing context-free function analysis. It supports various aliasing conditions via the usage of nondeterminism, with multiple output states potentially being produced from a given input state. The second approach adds function context and abstract interpretation-inspired modeling of the C++ exception handling (EH) application binary interface (ABI), allowing for the discovery of previously-unknown paths while maintaining or increasing automation. / Doctor of Philosophy / Modern computer programs are so complicated that individual humans cannot manually check all but the smallest programs to make sure they are correct and secure. This is even worse if you want to reduce the trusted computing base (TCB), the stuff that you have to assume is working right in order to say a program will execute correctly. The TCB includes your computer itself, but also whatever tools were used to take the programs written by programmers and transform them into a form suitable for running on a computer. Such tools are often called compilers. One method of reducing the TCB is to examine the lowest-level representation of that program, the assembly or even machine code that is actually run by your computer. This poses unique challenges, because operating on such a low level means you do not have a lot of the structure that a more abstract, higher-level representation provides. 
Also, sometimes you want to formally state things about a program's behavior; that is, say things about what it does with a high degree of confidence based on mathematical principles. You may also want to verify that one or more of those statements are true. If you want to be detailed about that behavior, you may need to know all of the chunks, or regions, in random-access memory (RAM) that are used by that program. RAM, henceforth referred to as just "memory", is your computer's first place of storage for the information used by running programs. This is distinct from long-term storage devices like hard disk drives (HDDs) or solid-state drives (SSDs), which programs do not normally have direct access to. Unfortunately, there is no one single approach that can automatically determine with absolute certainty for all cases the exact regions of memory that are read or written. This is called undecidability, and means that you need to approximate those memory regions a lot of the time if you want to have a significant degree of automation. An underapproximation, an approach that only gives you some of the regions, is not useful for formal statements as it might miss out on some behavior; it is unsound. This means that you need an overapproximation, an approach that is guaranteed to give you at least the regions read or written. Therefore, the first contribution of this dissertation is a preliminary approach to such an overapproximation. This approach is based on the work of Robert L. Floyd, focusing on the direct control flow (where the steps of a program go) in an individual function (structured program component). It still requires a lot of user effort, including having to manually specify the regions in memory that were possibly used and do a lot of work to prove that those regions are (overapproximatively) correct, so our tests were limited in scope. The second contribution automated a lot of the manual work done for the first approach. It is based on the work of Charles Antony Richard Hoare, who developed a verification approach focusing on the syntax (the textual form) of programs. This contribution produces what we call formal memory usage certificates (FMUCs), which are formal statements that the regions of memory they describe are the only ones possibly affected by the functions under test. These statements also come with proofs, which for our work are like scripts used to verify that the things the FMUCs assert about the corresponding functions can be shown to be true given the assumptions our FMUCs have. Sometimes those proofs are incomplete, though, such as when there is a loop (repeated bit of code) in a function under test or one function calls (executes) another. In those cases, a user has to finish the proof, in the first case by weakening (removing information from) the FMUC's statements about the loop and in the second by composing, or combining, the FMUCs of the two functions. Additionally, this second approach cannot handle dynamic control flow. Such control flow occurs when the low-level instructions a program uses to move to another place in that program do not have a pre-stored location to go to. Instead, that location is supplied as the program is running. This is opposed to direct control flow, where the place to go to is hard-coded into the program when it is compiled.
The tool also cannot deal with aliasing, which is when different state parts (value-holding components) of a program contain the same value and that value is used as the numeric address or identifier of a location in memory. Specifically, it cannot deal with potential aliasing, when there is not enough information available to determine if the state parts alias or not. Because of that, we had to add extra assumptions to the FMUCs that limited them to those cases where ambiguous memory-referencing state parts referred to separate memory locations. Finally, it specifically requires assembly as input; you cannot directly supply a binary to it. This is also true of the first contribution. Because of this, we were able to test on more functions than before, but not a lot more. Not being able to deal with dynamic control flow is a big problem, as almost all programs use it. For example, when a function reaches its end, it has to figure out where to return to based on the current state of the program (in the previous contribution, this was done manually). This means that control flow recovery (CFR) is very important for many applications, including decompilation (converting a program back into a higher-level form), patching (updating a program in place without modifying the original code and recompiling it), and low-level analysis or verification in general. However, as you may have noticed from earlier in this paragraph, in order to deal with such dynamic control flow you need to figure out what the possible destinations are for the individual control flow transfers. That can require knowing where you came from in the program, which means that analysis of dynamic control flow requires context (in this context, information previously obtained in the program). Even worse, it is another undecidable problem that requires overapproximation. To soundly recover control flow, we developed Hoare graphs (HGs), the third contribution of this dissertation. HGs use memory models that take the form of forests, or collections of tree data structures. A single tree represents a region in memory that may have multiple symbolic references, or abstract representations of a value. The children of the tree represent regions used in the program that are enclosed within their parent tree elements. Now, instead of assuming that all ambiguous memory regions are separate, we can use them under various aliasing conditions. We have also implemented support for some forms of dynamic control flow. Those that are not supported are clearly marked in the resultant HG. No user interaction is required even when loops are present thanks to a methodology that automatically reduces the amount of information present at a re-executed instruction until the information stabilizes. Function composition is also automatic now thanks to a method that treats each function as its own context in a safe and automated way, reducing memory consumption of our tool and allowing larger programs to be examined. In the process we did lose the ability to deal with recursion (functions that call themselves or call other functions that call back to the original), though. Lastly, we provided the ability to directly load binaries into the tool, no external disassembly (converting machine code into human-readable instructions) needed. This all allowed much greater testing than before, with applications to multiple programs and program libraries.
The fourth and final contribution of this dissertation iterates on the HG work by narrowing focus to the concept of exceptional control flow. Specifically, it models the kind of exception handling used by C++ programs. This is important as, if you want to explore a program's behavior, you need to know all the places it goes to. If you use a tool that does not model exception handling, you may end up missing paths of execution caused by unwinding. This is when an exception is thrown and propagates up through the program's current stack of function calls, potentially reaching programmer-supplied handling for that exception. Despite this, commonplace tools for static, low-level program analysis do not model such unwinding. The control flow graphs (CFGs) produced by our exception-aware tool are called exceptional interprocedural control flow graphs (EICFGs). These provide information about the exceptions being thrown and what paths they take in the program when they are thrown. Additional improvements are a better methodology for handling dynamic control flow as well as adding back support for recursion. All told, this allowed us to explore even more programs than ever before.
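The forest-shaped memory model behind Hoare graphs can be made concrete with a small sketch. This simplified version assumes concrete offsets and sizes within each tree, whereas the actual HG machinery is symbolic and proof-backed; the names and layout below are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """One node of the memory forest: a symbolic base plus an extent.
    Children are subregions wholly enclosed in their parent."""
    base: str            # symbolic reference, e.g. "rsp0" or "malloc_1"
    offset: int          # start, in the tree's coordinate system
    size: int
    children: list = field(default_factory=list)

def may_alias(roots, a, b):
    """Separate trees model separate allocations and are assumed disjoint;
    within one tree, overlapping [offset, offset+size) ranges may alias."""
    def contains(node, target):
        return any(c is target or contains(c, target) for c in node.children)
    def root_of(r):
        return next((t for t in roots if r is t or contains(t, r)), None)
    ra, rb = root_of(a), root_of(b)
    if ra is None or ra is not rb:
        return False
    return a.offset < b.offset + b.size and b.offset < a.offset + a.size

stack = Region("rsp0", 0, 64)
local = Region("rsp0", 8, 8)
stack.children.append(local)
heap = Region("malloc_1", 0, 16)
print(may_alias([stack, heap], local, heap))    # False: different trees
print(may_alias([stack, heap], local, stack))   # True: ranges overlap
```

Keeping ambiguous regions in one tree rather than forcing them apart is what lets the analysis explore multiple aliasing conditions, producing several output states from one input state when necessary.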
14

Analyse de codes auto-modifiants pour la sécurité logicielle / Self-modifying code analysis for software security

Reynaud, Daniel 15 October 2010 (has links)
Self-modifying programs run in a very specific way: they are capable of rewriting their own code at runtime. Remarkably absent from theoretical computation models, they are present in every modern computer and operating system. Indeed, they are used by bootloaders, for just-in-time compilation, and for dynamic code optimization. They are also massively used by malware authors in order to bypass antivirus signatures and to delay analysis. Finally, they are unintentionally present in every program, since code injection vulnerabilities (such as buffer overflows) can be modeled as the ability for a program to accidentally execute data, which is a characteristic behavior of self-modifying programs.

In this thesis, we propose a formal framework to characterize advanced self-modifying behaviors and code armoring techniques. A prototype, TraceSurfer, allows us to detect these behaviors efficiently using fine-grained execution traces and to visualize them as self-reference graphs. Finally, we validate both the theoretical model and the tool experimentally by running them on a large corpus of malware samples.
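The core write-then-execute check underlying self-reference graphs can be sketched directly over a trace. This is a simplified illustration with an invented trace format; TraceSurfer's actual model distinguishes richer relations between code waves than this:

```python
def self_reference_edges(trace):
    """Scan a fine-grained execution trace for code executing bytes that
    were written earlier at runtime. Each entry is (pc, kind, addr) with
    kind in {"write", "exec"}; returns writer-pc -> executed-addr edges."""
    written_by = {}                 # byte address -> pc that last wrote it
    edges = set()
    for pc, kind, addr in trace:
        if kind == "write":
            written_by[addr] = pc
        elif kind == "exec" and addr in written_by:
            edges.add((written_by[addr], addr))   # self-modification hit
    return edges

# Toy unpacker: the instruction at 0x40 writes a byte at 0x80, and
# execution later reaches 0x80 -- a classic write-then-execute pattern.
trace = [(0x40, "write", 0x80), (0x41, "exec", 0x41), (0x41, "exec", 0x80)]
print(self_reference_edges(trace))   # {(64, 128)}
```

Accumulating such edges over a whole trace yields a graph whose structure distinguishes benign self-modification (e.g., a JIT) from the layered unpacking typical of armored malware.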
15

Scaling Software Security Analysis to Millions of Malicious Programs and Billions of Lines of Code

Jang, Jiyong 01 August 2013 (has links)
Software security is a big data problem. The volume of new software artifacts created far outpaces the current capacity of software analysis. This gap has brought an urgent challenge to our security community—scalability. If our techniques cannot cope with an ever increasing volume of software, we will always be one step behind attackers. Thus developing scalable analysis to bridge the gap is essential. In this dissertation, we argue that automatic code reuse detection enables an efficient data reduction of a high volume of incoming malware for downstream analysis and enhances software security by efficiently finding known vulnerabilities across large code bases. In order to demonstrate the benefits of automatic software similarity detection, we discuss two representative problems that are remedied by scalable analysis: malware triage and unpatched code clone detection. First, we tackle the onslaught of malware. Although over one million new malware samples are reported each day, existing research shows that most malware are not written from scratch; instead, they are automatically generated variants of existing malware. When groups of highly similar variants are clustered together, new malware more easily stands out. Unfortunately, current systems struggle with handling this high volume of malware. We scale clustering using feature hashing and perform semantic analysis using co-clustering. Our evaluation demonstrates that these techniques are an order of magnitude faster than previous systems and automatically discover highly correlated features and malware groups. Furthermore, we design algorithms to infer evolutionary relationships among malware, which helps analysts understand trends over time and make informed decisions about which malware to analyze first. Second, we address the problem of detecting unpatched code clones at scale. When buggy code gets copied from project to project, eventually all projects will need to be patched. We call clones of buggy code that have been fixed in only a subset of projects unpatched code clones. Unfortunately, code copying is usually ad-hoc and is often not tracked, which makes it challenging to identify all unpatched vulnerabilities in code bases at the scale of entire OS distributions. We scale unpatched code clone detection to spot over 15,000 latent security vulnerabilities in 2.1 billion lines of code from the Linux kernel, all Debian and Ubuntu packages, and all C/C++ projects in SourceForge in three hours on a single machine. To the best of our knowledge, this is the largest set of bugs ever reported in a single paper.
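The data reduction rests on making pairwise similarity cheap. Here is a minimal sketch of feature hashing over byte n-grams with a bit-vector Jaccard similarity; the hash choice, vector size, and n-gram features are illustrative assumptions, not the dissertation's exact design:

```python
import hashlib

def fingerprint(features, m=1024):
    """Hash arbitrary features (e.g., n-grams of a malware sample) into a
    fixed-size bit vector, trading exactness for constant space."""
    bits = 0
    for f in features:
        bits |= 1 << (int(hashlib.md5(f.encode()).hexdigest(), 16) % m)
    return bits

def jaccard(a, b):
    """Approximate set similarity computed directly on the bit vectors."""
    union = bin(a | b).count("1")
    return bin(a & b).count("1") / union if union else 0.0

def ngrams(s, n=4):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

v1 = fingerprint(ngrams("mov eax, ebx; call decrypt; jmp payload"))
v2 = fingerprint(ngrams("mov eax, ecx; call decrypt; jmp payload"))
print(f"similarity: {jaccard(v1, v2):.2f}")   # high => likely same family
```

Because every sample is reduced to a fixed-size vector, clustering a day's worth of incoming malware becomes bit operations rather than expensive pairwise code comparisons.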
16

Semantic monitoring mechanisms dedicated to security monitoring in IaaS cloud / Mécanismes de monitoring sémantique dédiés à la sécurité des infrastructures cloud IaaS

Hebbal, Yacine 18 September 2017 (has links)
Virtual machine introspection (VMI) consists in monitoring VM states and activities from the hypervisor layer, which, thanks to its location, offers both strong visibility into VM activities and strong isolation from them. However, the hypervisor's view of VMs is just raw bits and bytes, plus the states of virtual hardware resources. The semantic difference between this raw view and the one needed for VM security monitoring presents a significant challenge for VMI known as "the semantic gap". In order to obtain semantic information about VM states and activities for monitoring their security from the hypervisor layer, we present in this thesis a set of techniques based on the analysis and reuse of VM kernel binary code. These techniques enable us to identify the addresses and names of most VM kernel functions and then instrument them (call, intercept, and analyze) to automatically and efficiently bridge the semantic gap, even in the presence of compiler optimizations and randomization of the kernel code's location in VM memory.
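One simple way to picture the kernel-code-reuse idea is signature scanning over the raw guest memory view. The sketch below is a deliberately naive illustration; the thesis's actual techniques identify and instrument kernel functions through binary analysis that remains robust under compiler optimizations and base-address randomization, which plain byte matching is not:

```python
def find_kernel_functions(memory, signatures, kernel_base=0):
    """Narrow the semantic gap a little: scan the hypervisor's raw view of
    guest memory for byte signatures of known kernel function prologues.
    `signatures` maps a function name to its expected leading bytes."""
    found = {}
    for name, sig in signatures.items():
        idx = memory.find(sig)
        if idx != -1:
            found[name] = kernel_base + idx   # recovered virtual address
    return found

# Toy guest snapshot containing one classic x86-64 function prologue.
guest = b"\x00" * 0x40 + b"\x55\x48\x89\xe5\x41\x57" + b"\x00" * 0x20
sigs = {"do_fork": b"\x55\x48\x89\xe5\x41\x57"}
print(find_kernel_functions(guest, sigs, kernel_base=0xFFFF800000000000))
```

Once a function's name and address are known, the monitor can intercept or call it from the hypervisor, turning raw bytes back into semantic events such as process creation.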
17

Proof-producing resolution of indirect jumps in the binary intermediate representation BIR / Bevis-producerande bestämning av indirekta hopp i den binära mellanliggande representationen BIR

Westerberg, Adrian January 2021 (has links)
HolBA is a binary analysis library that can be used to formally verify binary programs using contracts. It is developed in the interactive theorem prover HOL4 to achieve a high degree of trust in verification: the result of verification is a machine-checked proof demonstrating its correctness. This thesis presents two proof-producing procedures. The first resolves indirect jumps in BIR, the binary intermediate language used in HolBA, given their possible targets. The second transfers contracts proved on resolved BIR programs without indirect jumps back to the original programs containing indirect jumps. Together, these allow the existing weakest-precondition generator to automatically prove contracts on loop-free BIR fragments containing indirect jumps. The implemented proof-producing procedures were evaluated on a small binary program and on generated synthetic BIR programs. The evaluation found that the first proof-producing procedure is not very efficient, which could pose a problem when verifying large binary programs. Future work could include improving the efficiency of the first proof-producing procedure and integrating it with an external tool that automatically finds the possible targets of indirect jumps.
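The effect of the first procedure can be pictured as a program transformation. The sketch below uses an invented toy IR, not HolBA's actual BIR datatypes or its HOL4 proof tactics: each indirect jump becomes a cascade of guarded direct jumps over its known targets, plus a trap that records the assumption that the target list was complete:

```python
def resolve_indirect_jump(stmt, targets):
    """Rewrite a toy-IR indirect jump ("ijmp", expr) into a cascade of
    conditional direct jumps, one per known target, ending in an
    assertion that fails if the supplied target set was incomplete."""
    _, expr = stmt
    resolved = [("cjmp", ("eq", expr, t), t) for t in targets]
    resolved.append(("assert", False))  # unreachable if targets are complete
    return resolved

for s in resolve_indirect_jump(("ijmp", ("reg", "x30")), [0x400, 0x42C]):
    print(s)
```

After this rewrite every jump destination is statically known, which is what lets a weakest-precondition generator, which needs a static control flow graph, process the fragment; the proof-producing version additionally emits a HOL4 theorem relating the resolved program to the original.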
18

Memory Management Error Detection in Parallel Software using a Simulated Hardware Platform

Sinha, Udayan Prabir January 2017 (has links)
Memory management errors in concurrent software running on multi-core architectures can be difficult and costly to detect and repair. Examples of errors are usage of uninitialized memory, memory leaks, and data corruption due to unintended overwrites of data that are not owned by the writing entity. If memory management errors could be detected at an early stage, for example when using a simulator before the software has been delivered and integrated in a product, significant savings could be achieved. This thesis investigates and develops methods for detection of usage of uninitialized memory in software that runs on a virtual hardware platform. The virtual hardware platform has models of Ericsson Radio Base Station hardware for baseband processing and digital radio processing. It is a bit-accurate representation of the underlying hardware, with models of processors and peripheral units, and it is used at Ericsson for software development and integration. There are tools available, such as Memcheck (Valgrind), and MemorySanitizer and AddressSanitizer (Clang), for memory management error detection. The features of such tools were investigated, and memory management error detection algorithms were developed for a given processor's instruction set. The error detection algorithms were implemented in a virtual platform, taking into account issues and design considerations reflecting the application-specific instruction set architecture of the processor. A prototype implementation of memory error presentation, with error locations mapped to the source code of the running program and with presentation of stack traces, was produced using functionality from a debugger. An experiment using a purpose-built test program evaluated the error detection capability of the algorithms in the virtual platform and compared it with that of Memcheck. The virtual platform implementation detects all known errors in the program except one, and reports them to the user in an appropriate manner. Some false positives are reported, mainly due to limited awareness of the operating system used on the simulated processor.
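The classic mechanism for detecting uninitialized reads, as used by Memcheck, is shadow memory. Here is a byte-granularity sketch of that idea; it is a simplification for illustration, not the implementation built into Ericsson's virtual platform:

```python
class ShadowMemory:
    """Track one 'initialized' bit per simulated memory byte and flag
    loads that touch bytes no store has defined yet."""
    def __init__(self):
        self.defined = set()    # addresses of initialized bytes
        self.errors = []        # (pc, undefined byte addresses)

    def on_store(self, addr, size):
        self.defined.update(range(addr, addr + size))

    def on_load(self, addr, size, pc):
        bad = [a for a in range(addr, addr + size) if a not in self.defined]
        if bad:
            self.errors.append((pc, bad))   # report with the faulting pc

shadow = ShadowMemory()
shadow.on_store(0x1000, 4)           # the program writes a 4-byte word
shadow.on_load(0x1000, 8, pc=0x42)   # then reads 8: the upper 4 undefined
print(shadow.errors)
```

In a simulator, the `on_store`/`on_load` hooks attach naturally to the bit-accurate processor model, and the recorded program counter can be mapped back to source lines and stack traces via the debugger, as the prototype does.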
19

Asteroseismic inferences from red-giant stars

Themeßl, Nathalie 28 September 2018 (has links)
No description available.
