Practical Methods for Fuzzing Real-World Systems
Prashast Srivastava, 27 April 2023
The current software ecosystem is exceptionally complex. A key defining feature of this complexity is the vast input space that software applications must process. This feature inhibits fuzzing (an effective automated testing methodology) from uncovering deep bugs, i.e., bugs with complex preconditions. We improve the bug-finding capabilities of fuzzers by reducing the input space that they have to explore. Our techniques incorporate domain knowledge from the software under test. In this dissertation, we research how to incorporate domain knowledge in different scenarios, across a variety of software domains and test objectives, to perform deep bug discovery.
We start by focusing on language interpreters, which form the backend of our web ecosystem. Uncovering deep bugs in these interpreters requires synthesizing inputs that perform a diverse set of semantic actions. To tackle this issue, we present Gramatron, a fuzzer that employs grammar automatons to speed up bug discovery. Then, we explore firmware from the rapidly growing IoT ecosystem, which generally lacks thorough testing. FirmFuzz infers the appropriate runtime state required to trigger vulnerabilities in this firmware using the domain knowledge encoded in the user-facing network applications. Additionally, we showcase how our proposed strategy of incorporating domain knowledge is beneficial under alternative testing scenarios where a developer analyzes specific code locations, e.g., for patch testing. SieveFuzz leverages knowledge of targeted code locations to prohibit exploration of code regions, and correspondingly the parts of the input space, that are irrelevant to reaching the target location. Finally, we move beyond the realm of memory-safety vulnerabilities and show how domain knowledge can be useful in uncovering logical bugs, specifically deserialization vulnerabilities in Java-based applications, with Crystallizer. Crystallizer uses a hybrid analysis methodology: it first infers an over-approximate set of possible payloads through static analysis (to constrain the search space), then uses dynamic analysis to instantiate concrete payloads as proofs of concept of a deserialization vulnerability.
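To make the Gramatron idea concrete: a context-free grammar is compiled into a grammar automaton whose transitions emit terminals, and inputs are generated as random walks over that automaton. The sketch below is a minimal illustration of such a walk; the toy grammar, state names, and walk policy are assumptions for exposition, not Gramatron's actual data structures.

```python
import random

# Toy grammar automaton: each state maps to (emitted terminal, next state)
# transitions; a next state of None marks completion. An illustrative
# simplification of the automatons a grammar would compile into.
AUTOMATON = {
    "stmt": [("var x = ", "expr"), ("print(", "arg")],
    "expr": [("1", None), ("x + ", "expr"), ("f()", None)],
    "arg":  [("x)", None), ("\"s\")", None)],
}

def random_walk(state="stmt", max_steps=32):
    """Generate one test input by walking the automaton from the start state."""
    out, steps = [], 0
    while state is not None and steps < max_steps:
        terminal, state = random.choice(AUTOMATON[state])
        out.append(terminal)
        steps += 1
    return "".join(out)

if __name__ == "__main__":
    for _ in range(5):
        print(random_walk())   # e.g. "var x = x + 1" -- syntactically valid by construction
```

Because every emitted string follows the grammar, the fuzzer's budget is spent exploring interpreter semantics rather than being rejected by the parser.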
Throughout these four diverse areas, we thoroughly demonstrate how incorporating domain knowledge can massively improve bug-finding capabilities. Our research has developed tooling that not only outperforms the existing state of the art in efficient bug discovery (up to 117% faster), but has also uncovered 18 previously unknown bugs, with five CVEs assigned.
Fuzzing Deeper Logic with Impeding Function Transformation
Rowan Brock Hart, 02 December 2022
Fuzzing, a technique for negative testing of programs using randomly mutated or generated input data, is responsible for the discovery of thousands of bugs in software from web browsers to video players. Advances in fuzzing focus on various methods for increasing the number of bugs found and reducing the time spent finding them by applying various static, dynamic, and symbolic binary analysis techniques. As a stochastic process, fuzzing is an inherently inefficient method for discovering bugs residing in the deep logic of programs, due to the compounding complexity of preconditions as paths in programs grow in length. We propose a novel system to overcome this limitation by abstracting away path-constraining preconditions from the statement level to the function level, identifying impeding functions: functions that inhibit control flow from proceeding. REFACE is an end-to-end system that enhances the capabilities of an existing fuzzer by generating variant binaries that present an easier-to-fuzz interface, expanding an ongoing fuzzing campaign with minimal offline overhead. REFACE operates entirely on binary programs, requiring no source code or symbols to run, and is fuzzer-agnostic. This enhancement represents a step forward in a new direction toward abstraction of code that has historically presented a significant barrier to fuzzing, and aims to make incremental progress by way of several ancillary dataflow analysis techniques with potential wide applicability. We attain a significant improvement in the speed of obtaining maximum coverage, re-discover one known bug, and discover one possible new bug in a binary program during evaluation against an unmodified state-of-the-art fuzzer with no augmentation.
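The impeding-function transformation can be pictured with a toy example: a function whose return value gates deep logic (here, a checksum check) is stubbed out in a variant so the fuzzer reaches the guarded code directly; crashes found in the variant would then need re-validation against the original. This Python sketch only illustrates the concept under invented names; REFACE performs the equivalent rewriting on binaries without source.

```python
import zlib

def parse_packet(data: bytes, checksum_ok) -> str:
    """Deep logic guarded by an impeding function."""
    if not checksum_ok(data):
        return "rejected"          # a fuzzer rarely gets past this gate
    if data[4:8] == b"BOOM":       # the deep bug we want to reach
        raise RuntimeError("bug reached")
    return "accepted"

# Original impeding function: a real checksum over the payload.
def real_check(data: bytes) -> bool:
    return len(data) >= 4 and int.from_bytes(data[:4], "big") == zlib.crc32(data[4:])

# Variant in the spirit of REFACE: the impeding function is stubbed to always
# pass, exposing the deep logic directly to the fuzzer.
def stubbed_check(data: bytes) -> bool:
    return True

if __name__ == "__main__":
    payload = b"\x00\x00\x00\x00BOOM"
    print(parse_packet(payload, real_check))      # "rejected": checksum gate holds
    try:
        parse_packet(payload, stubbed_check)      # variant: gate removed, bug reached
    except RuntimeError as e:
        print("variant:", e)
```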
Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines
Tobkin, Toby, 01 May 2013
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: black-box or white-box. Black-box fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated input, while white-box fuzzers do. A tradeoff involved in the choice between black-box and white-box fuzzing is the rate at which inputs can be produced: since black-box fuzzers need not do any "thinking" about the software under test to generate inputs, they can generate more inputs per unit time, all other factors being equal. The question of how black-box and white-box fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study intended to characterize an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of black-box and white-box fuzzers. A black-box fuzzer is implemented and extended with a concolic execution program to make it white-box. Both versions of the fuzzer are then used to run tests on some small programs and some parts of a file compression library.
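The black-box side of such a fuzzer reduces to a mutate-run-observe loop that knows nothing about the target's input format. A minimal sketch follows; the target command and seed are placeholders, and a real implementation would add corpus management and crash triage.

```python
import random
import subprocess
import sys

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Flip a few random bytes -- no knowledge of the target's input format."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target_cmd, seed: bytes, iterations: int = 1000):
    """Run mutated inputs through the target and collect signal-killed runs."""
    crashes = []
    for i in range(iterations):
        case = mutate(seed)
        proc = subprocess.run(target_cmd, input=case, capture_output=True)
        if proc.returncode < 0:            # killed by a signal, e.g. SIGSEGV
            crashes.append((i, case))
    return crashes

if __name__ == "__main__":
    # Hypothetical target: a decompressor reading from stdin.
    seed = open(sys.argv[1], "rb").read() if len(sys.argv) > 1 else b"\x1f\x8b" + b"A" * 64
    for i, case in fuzz(["gunzip", "-c"], seed, iterations=100):
        print(f"iteration {i}: crash, {len(case)} bytes")
```

A white-box extension would replace mutate() with input generation driven by concolic execution over the target's path constraints, trading raw input rate for precision.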
Torpedo: A Fuzzing Framework for Discovering Adversarial Container Workloads
McDonough, Kenton Robert, 13 July 2021
Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. In contrast to traditional VM technologies, containers theoretically provide the same process isolation guarantees with less overhead, and additionally introduce finer-grained options for resource allocation. Cloud providers have widely adopted container-based architectures as the standard for multi-tenant hosting services, and rely on the underlying security guarantees to ensure that adversarial workloads cannot disrupt the activities of co-resident containers on a given host. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way cgroups have been added to the Linux kernel, there exist vulnerabilities that allow containerized processes to generate "out of band" workloads and negatively impact the performance of the entire host without being appropriately charged. Because of the relative complexity of the kernel, discovering these vulnerabilities through traditional static analysis tools may be very challenging. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for sequences of system calls that break process isolation boundaries. TORPEDO combines traditional code coverage feedback with resource utilization measurements to motivate the generation of "adversarial" programs based on user-defined criteria. Experiments conducted on the default Docker runtime runC, as well as the virtualized runtime gVisor, independently reconfirm several known vulnerabilities and discover interesting new results and bugs, giving us a promising framework for conducting further research.

Master of Science: Over the last decade, container technology has fundamentally changed the landscape of commercial cloud computing services. By abstracting away many of the system details required to deploy software, developers can rapidly prototype, deploy, and take advantage of massive distributed frameworks when deploying new software products. These paradigms are supported by corresponding business models offered by cloud providers, who allocate space on powerful physical hardware among many potentially competing services. Unfortunately, recent work has shown that the isolation guarantees provided by containers are not absolute. Due to inconsistencies in the way containers have been implemented in the Linux kernel, there exist vulnerabilities that allow containerized programs to generate "out of band" workloads and negatively impact the performance of other containers. In general, these vulnerabilities are difficult to identify, but they can be very severe. In this work, we present TORPEDO, a set of modifications to the SYZKALLER fuzzing framework that creates containerized workloads and searches for programs that negatively impact other containers. TORPEDO uses a novel technique that combines resource monitoring with code coverage approximations, and initial testing on common container software has revealed interesting new vulnerabilities and bugs.
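TORPEDO's scoring idea, combining coverage feedback with resource accounting so that workloads which consume host resources without being charged rank as interesting, can be sketched as follows. The metrics, weights, and field names are illustrative assumptions, not TORPEDO's actual user-defined criteria.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    new_edges: int          # coverage feedback from the fuzzer
    cpu_seconds: float      # host-side CPU actually consumed by the workload
    charged_seconds: float  # CPU the container was billed for by cgroups

def score(obs: Observation, w_cov: float = 1.0, w_oob: float = 10.0) -> float:
    """Rank a syscall program: reward new coverage, and reward 'out of band'
    work -- host resources consumed beyond what the container was charged."""
    out_of_band = max(0.0, obs.cpu_seconds - obs.charged_seconds)
    return w_cov * obs.new_edges + w_oob * out_of_band

if __name__ == "__main__":
    benign = Observation(new_edges=3, cpu_seconds=0.5, charged_seconds=0.5)
    adversarial = Observation(new_edges=0, cpu_seconds=4.0, charged_seconds=0.1)
    print(score(benign), score(adversarial))   # the adversarial program ranks higher
```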
Detection of web vulnerabilities via model inference assisted evolutionary fuzzing
Duchene, Fabien, 02 June 2014
Testing is a viable approach for detecting implementation bugs that have a security impact, a.k.a. vulnerabilities. When the source code is not available, it is necessary to use black-box testing techniques. We address the problem of automatically detecting a certain class of vulnerabilities (Cross-Site Scripting, a.k.a. XSS) in web applications, in a black-box test context. We propose an approach for inferring models of web applications and fuzzing from such models and an attack grammar. We infer control-flow plus taint-flow automata, from which we produce slices that narrow the fuzzing search space. Genetic algorithms are then used to schedule the malicious inputs that are sent to the application. We produce a test verdict by performing a double taint inference on the browser parse tree and combining this with taint-aware vulnerability patterns. Our implementations, LigRE and KameleonFuzz, outperform current open-source black-box scanners. We discovered 0-day XSS (i.e., previously unknown vulnerabilities) in web applications used by millions of users.
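The evolutionary step can be sketched as a small genetic algorithm over candidate payloads, with a fitness function standing in for the taint inference on the browser parse tree. The token alphabet and scoring below are toy assumptions for illustration, not KameleonFuzz's actual attack grammar or verdict.

```python
import random

TOKENS = ["<", ">", "script", "img", "onerror=", "alert(1)", "\"", "'", "/", " "]

def fitness(payload: str) -> int:
    """Toy stand-in for taint inference on the browser parse tree: reward
    payloads that survive as markup rather than being escaped as text."""
    score = 0
    if "<" in payload and ">" in payload:
        score += 1                      # parsed as a tag at all
    if "script" in payload or "onerror=" in payload:
        score += 2                      # reaches an executable sink
    if "alert(1)" in payload:
        score += 3                      # full proof-of-concept
    return score

def evolve(pop_size=20, generations=30):
    pop = ["".join(random.choices(TOKENS, k=6)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = a[: len(a) // 2] + b[len(a) // 2 :]   # crossover
            if random.random() < 0.3:                     # mutation
                child += random.choice(TOKENS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```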
The Hare, the Tortoise and the Fox: Extending Anti-Fuzzing
Dewitz, Anton; Olofsson, William, January 2022
Background. The goal of our master's thesis is to reduce the effectiveness of fuzzers using coverage accounting. The method we chose is based on how the coverage accounting in TortoiseFuzz rates code paths to find memory corruption bugs: it simply looks for functions that tend to cause vulnerabilities, and considers more of them to be better. Our approach is to insert extra calls to these memory functions inside the fake code paths generated by anti-fuzzing. Objectives. Our thesis surveys current anti-fuzzing techniques to determine which tool to extend with our counter to coverage accounting. We conduct an experiment in which we run several fuzzers on different benchmark programs to evaluate our tool. Methods. The foundation for the anti-fuzzing tool is obtained through a literature review, to evaluate current anti-fuzzing techniques and to understand how coverage accounting prioritizes code paths. Afterward, an experiment is conducted to evaluate the resulting tool. To evaluate the fuzzers, the FuzzBench platform is used: a homogeneous test environment that lets future research compare more easily with prior work on a standard platform. Benchmarks representative of real-world applications are chosen from within this platform. Each benchmark is executed in three versions: the original, one protected by a prior anti-fuzzing tool, and one protected by our new anti-fuzzing tool. Results. The experiment showed that our anti-fuzzing tool successfully lowered the number of unique bugs found by TortoiseFuzz, even when the benchmark was also protected by a previously developed anti-fuzzing tool. Conclusions. We conclude, based on our results, that our tool shows promise against a fuzzer using coverage accounting. Further study will push fuzzers to become even better at overcoming new anti-fuzzing methods.
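The counter to coverage accounting described above amounts to weaving in decoy code paths saturated with calls to the memory functions the metric counts: each decoy is reachable on unlikely input bytes, so a fuzzer does stumble into it, and its memory-function calls make coverage accounting rate those inputs as promising even though the paths are semantically dead. Below is a hedged sketch of such a generator; the emitted C shape, trigger scheme, and function list are illustrative assumptions, not the thesis tool's actual transformation.

```python
import random

# C decoy branches in the spirit of anti-fuzzing: reachable fake paths stuffed
# with the memory functions a coverage-accounting metric rewards. The emitted
# snippet is a fragment (it assumes <string.h> in the host program).
MEM_CALLS = [
    "memcpy(scratch, buf, {n});",
    "strncpy(scratch, buf, {n});",
    "memset(scratch, 0, {n});",
]

def decoy_branch(i: int) -> str:
    trigger = random.randrange(256)   # input byte value that enters the decoy
    calls = "\n    ".join(
        random.choice(MEM_CALLS).format(n=random.randint(1, 32))
        for _ in range(3))
    return (f"  if (buf[{i}] == {trigger}) {{ /* decoy path {i}: semantically dead */\n"
            f"    {calls}\n"
            f"  }}\n")

def emit_decoys(n: int = 4) -> str:
    body = "".join(decoy_branch(i) for i in range(n))
    return ("void decoys(const char *buf, size_t len) {\n"
            "  char scratch[64];\n"
            "  if (len < %d) return;\n" % n
            + body + "}\n")

if __name__ == "__main__":
    random.seed(1)
    print(emit_decoys())
```

Note that every generated copy size stays below the scratch buffer, so the decoys inflate the metric without introducing real bugs of their own.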
SUNNYMILKFUZZER: An Optimized Fuzzer for JVM-Based Languages
Junyang Shao, 27 July 2023
This thesis presents an in-depth investigation into opportunities for optimizing the performance (throughput) of fuzzing for Java Virtual Machine (JVM)-based languages. The study identifies five main areas for potential optimization, each of which contributes to the performance bottlenecks in the existing state-of-the-art Java fuzzer, Jazzer.
Firstly, the use of coverage probes is costly due to the native method call involved, including call frame generation and destruction, even though each probe only performs a simple byte increment. Secondly, probes may become exhausted and cease to generate signals for new interesting inputs, while their associated costs persist. Thirdly, scanning the coverage map is expensive, particularly for targets with a large amount of loaded bytecode: since a test input can only execute a portion of it, the probes for most of the bytecode are scanned repeatedly without generating any signal, indicating the need for a more structured coverage map design that can skip those probes effectively. Lastly, exception handling in the JVM is costly, as the JVM automatically fills in the stack trace whenever an exception object is created, even though most targets do not use this information.
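The structured-coverage-map point can be made concrete with a region-partitioned map that keeps a per-region dirty flag, so the post-execution scan skips regions no probe touched. This is a toy model of the idea, not Jazzer's or the thesis's actual layout.

```python
class RegionCoverageMap:
    """Coverage map split into fixed-size regions with dirty flags, so the
    post-execution scan touches only regions some probe actually hit."""

    def __init__(self, n_probes: int, region_size: int = 1024):
        self.counters = bytearray(n_probes)
        self.region_size = region_size
        n_regions = (n_probes + region_size - 1) // region_size
        self.dirty = [False] * n_regions

    def hit(self, probe_id: int):
        # What an instrumented coverage probe would do: one increment, one flag.
        self.counters[probe_id] = (self.counters[probe_id] + 1) & 0xFF
        self.dirty[probe_id // self.region_size] = True

    def scan_and_reset(self):
        """Yield ids of probes hit since the last scan, skipping clean regions."""
        for r, is_dirty in enumerate(self.dirty):
            if not is_dirty:
                continue                      # whole region untouched: skip it
            base = r * self.region_size
            for i in range(base, min(base + self.region_size, len(self.counters))):
                if self.counters[i]:
                    yield i
                    self.counters[i] = 0
            self.dirty[r] = False

if __name__ == "__main__":
    cov = RegionCoverageMap(1 << 16)
    for pid in (7, 9000, 9001):
        cov.hit(pid)
    print(list(cov.scan_and_reset()))   # [7, 9000, 9001]; 62 of 64 regions skipped
```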
The study then designs and implements optimization techniques for these opportunities. We believe we provide the optimal solution for the first opportunity, while better optimizations could be proposed for the second, third, and fourth. The collective improvement brought about by these implementations is 138% in throughput on average, and up to 441%. This work thus offers valuable insights into enhancing the efficiency of fuzz testing in JVM languages, and paves the way for further research into optimizing other areas of JVM-based-language fuzzing performance.
Security Architecture for Voice over IP (Architecture de Sécurité sur la Voix sur IP)
Abdelnur, Humberto, 30 March 2009
Voice over IP (VoIP) solutions are currently booming, gaining new market share every day thanks to their low cost and a rich palette of services. Because voice over IP travels across the Internet or uses its protocols, it becomes the target of multiple attacks that can put its use at risk. Among the most dangerous threats are the bugs and flaws in the software implementations of the equipment involved in delivering these services. This thesis comprises three contributions to improving software security. The first is a security-audit architecture for VoIP services, integrating discovery, data management, and attacks for testing purposes. The second contribution is an autonomous approach to discriminating message signatures, enabling the automation of the passive fingerprinting function used to identify the source of a message uniquely and unambiguously. The third contribution concerns the dynamic detection of vulnerabilities in advanced states of a protocol interaction with a target device. The experience gained in searching for vulnerabilities in the VoIP world with our algorithms is also shared in this thesis.
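Passive fingerprinting of the kind described can be sketched as extracting implementation-revealing syntactic features from observed protocol messages and matching them against learned signatures. The SIP features and the signature table below are hypothetical stand-ins for the thesis's automatically discriminated signatures.

```python
def features(sip_message: str) -> tuple:
    """Extract implementation-revealing syntactic features from a SIP message:
    the order of header fields and whether compact forms (v:, f:, t:) appear."""
    lines = sip_message.split("\r\n")
    header_order = tuple(l.split(":")[0] for l in lines[1:] if ":" in l)
    has_compact = any(l.startswith(("v:", "f:", "t:")) for l in lines)
    return (header_order, has_compact)

# Hypothetical signature table, as would be learned from labeled traffic.
SIGNATURES = {
    (("Via", "From", "To", "Call-ID", "CSeq"), False): "vendor stack A",
    (("To", "From", "Via", "Call-ID", "CSeq"), False): "vendor stack B",
}

def fingerprint(msg: str) -> str:
    return SIGNATURES.get(features(msg), "unknown implementation")

if __name__ == "__main__":
    msg = ("OPTIONS sip:a@b SIP/2.0\r\n"
           "Via: SIP/2.0/UDP h;branch=z9hG4bK1\r\n"
           "From: <sip:x@b>\r\nTo: <sip:a@b>\r\n"
           "Call-ID: 1@h\r\nCSeq: 1 OPTIONS\r\n")
    print(fingerprint(msg))   # matches "vendor stack A" in this toy table
```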
Protecting Computer Systems against Input/Output Attacks
Lone Sang, Fernand, 27 November 2012
Nowadays, attacks against computer systems may involve hardware components in order to bypass the numerous countermeasures against malicious software. This PhD thesis focuses on this novel class of attacks and specifically deals with input/output attacks, in which attackers divert legitimate hardware features, such as I/O mechanisms, to achieve malicious actions. Since such attacks are hard to detect with conventional software techniques (insofar as they do not require the intervention of the CPU), we have analyzed them in order to propose appropriate countermeasures, based mainly on reliable and unavoidable hardware components. This manuscript focuses on two cases: hardware components deliberately designed to be malicious, acting in the same way as a program incorporating a Trojan horse; and vulnerable hardware components that have been modified by an attacker, either locally or through the network, to include malicious functions (typically, a backdoor in the firmware). To identify potential I/O attacks, we developed an attack model that takes into account the different abstraction levels in a computer system. We then studied these attacks with two complementary approaches: classical vulnerability analysis, consisting of identifying a vulnerability, developing a proof of concept, and proposing countermeasures; and fuzzing-based vulnerability analysis on the I/O buses, using IronHide, a fault-injection tool we designed that is able to simulate a powerful malicious hardware device. The results obtained with both approaches are discussed, and several countermeasures to the vulnerabilities we identified, based on existing hardware components, are proposed.
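The fuzzing-based half of that methodology can be pictured as a malicious peripheral issuing DMA reads at randomized addresses and flagging any access that succeeds inside a window the platform was supposed to protect. The model below is a self-contained stand-in (IronHide is actual hardware); the buggy off-by-one-page IOMMU window, address ranges, and trial count are invented for illustration.

```python
import random

class SimulatedBus:
    """Stand-in for the I/O interconnect a malicious peripheral plugs into."""
    def __init__(self, protected_ranges):
        self.protected = protected_ranges   # windows an IOMMU must refuse

    def dma_read(self, addr: int) -> bool:
        """True if the access goes through. This model has an off-by-one-page
        bug: the last page of each protected window is left unguarded."""
        return not any(lo <= addr < hi - 0x1000 for lo, hi in self.protected)

def fuzz_bus(bus, trials=20000, addr_space=0x800000):
    """Issue DMA reads at random addresses; report reads that succeeded even
    though they fall inside a range the platform should have protected."""
    findings = []
    for _ in range(trials):
        addr = random.randrange(addr_space)
        should_block = any(lo <= addr < hi for lo, hi in bus.protected)
        if should_block and bus.dma_read(addr):
            findings.append(addr)           # isolation violation found
    return findings

if __name__ == "__main__":
    bus = SimulatedBus(protected_ranges=[(0x100000, 0x400000)])
    hits = fuzz_bus(bus)
    print(len(hits), "violations, e.g.", [hex(a) for a in hits[:3]])
```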
Retrowrite: Statically Instrumenting COTS Binaries for Fuzzing and Sanitization
Sushant Dinesh, 10 June 2019
End users of closed-source software currently cannot easily analyze the security of programs or patch them if flaws are found. Notably, end users can include developers who use third-party libraries. The current state of the art for coverage-guided binary fuzzing or binary sanitization is dynamic binary translation, which results in prohibitive overhead. Existing static rewriting techniques cannot fully recover symbolization information, and so have difficulty modifying binaries to track code coverage for fuzzing or to add security checks for sanitizers.

The ideal solution for adding instrumentation is a static rewriter that can intelligently add in the required instrumentation as if it were inserted at compile time. This requires analysis to statically disambiguate between references and scalars, a problem known to be undecidable in the general case. We show that recovering this information is possible in practice for the most common class of software and libraries: 64-bit, position-independent code. Based on our observation, we design a binary-rewriting instrumentation to support American Fuzzy Lop (AFL) and AddressSanitizer (ASan), and show that we achieve compiler levels of performance while retaining precision. Binaries rewritten for coverage-guided fuzzing using RetroWrite are identical in performance to compiler-instrumented binaries and outperform the default QEMU-based instrumentation by 7.5x while triggering more bugs. Our implementation of binary-only AddressSanitizer is 3x faster than Valgrind memcheck, the state-of-the-art binary-only memory checker, and detects 80% more bugs in our security evaluation.
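The coverage instrumentation AFL expects follows a well-known edge-coverage scheme: each instrumented basic block updates a shared map byte indexed by the XOR of the current block id and the shifted previous one. A Python model of that update logic follows; the block ids are random placeholders, and a rewriter like RetroWrite emits the equivalent as instructions inserted into the binary.

```python
import random

MAP_SIZE = 1 << 16
coverage = bytearray(MAP_SIZE)
prev_loc = 0

def block_probe(cur_loc: int):
    """AFL-style edge-coverage update, as inserted instrumentation performs it."""
    global prev_loc
    idx = (cur_loc ^ prev_loc) % MAP_SIZE
    coverage[idx] = (coverage[idx] + 1) & 0xFF
    prev_loc = cur_loc >> 1   # shift so the edges A->B and B->A hash differently

if __name__ == "__main__":
    random.seed(0)
    # Each basic block gets a random id assigned at rewriting time.
    block_ids = {name: random.randrange(MAP_SIZE) for name in ("a", "b", "c")}
    for block in ("a", "b", "c", "b", "c"):   # a simulated execution path
        block_probe(block_ids[block])
    print(sum(1 for v in coverage if v), "distinct edges hit")
```

Getting this into a stripped binary is exactly where symbolization matters: the rewriter must relocate code to make room for the probes, which is only safe once references and scalars are disambiguated.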