11

Practical Feedback and Instrumentation Enhancements for Performant Security Testing of Closed-source Executables

Nagy, Stefan 25 May 2022 (has links)
The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to a lightweight defect discovery approach known as fuzzing: automated testing that uncovers program bugs through repeated injection of randomly-mutated test cases. Academic and industry efforts have long exploited the semantic richness of open-source software to enhance fuzzing with fast and fine-grained code coverage feedback, as well as fuzzing-enhancing code transformations facilitated through lightweight compiler-based instrumentation. However, the world's increasing reliance on closed-source software (i.e., commercial, proprietary, and legacy software) demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the semantic gaps between source code and opaque binary code leave fuzzing nowhere near as effective on closed-source targets. The difficulty of balancing coverage feedback speed and precision in binary executables leaves fuzzers frequently bottlenecked and orders-of-magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying binary executables at scale leave closed-source software fuzzing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant fuzzing techniques unrestricted by the availability of source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case fuzzing insights be exploited to achieve low-overhead, fine-grained code coverage feedback irrespective of access to source code? 2. What properties of binary instrumentation are needed to extend performant fuzzing-enhancing program transformation to closed-source software fuzzing? In answering these questions, this thesis produces the following key innovations: A. The first code coverage techniques to enable fuzzing speed and code coverage greater than those of source-level fuzzing for closed-source software targets. (chapter 3) B. The first instrumentation platform to extend both compiler-quality code transformation and compiler-level speed to closed-source fuzzing contexts. (chapter 4) / Doctor of Philosophy / The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to lightweight defect discovery through automated testing, uncovering program bugs through the repeated injection of randomly-mutated test cases.
Academic and industry efforts have long exploited the semantic richness of open-source software (i.e., software whose full internals are publicly available, interpretable, and changeable) to enhance testing with fast and fine-grained exploration feedback, as well as with testing-enhancing program transformations facilitated during the process by which program executables are generated. However, the world's increasing reliance on closed-source software (i.e., software whose internals are opaque to anyone but its original developer), such as commercial, proprietary, and legacy programs, demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the challenges of understanding programs without their full source information leave testing nowhere near as effective on closed-source programs. The difficulty of balancing exploration feedback speed and precision in program executables leaves testing frequently bottlenecked and orders-of-magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying program executables at scale leave closed-source software testing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant software testing techniques unrestricted by the availability of programs' source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case testing insights be exploited to achieve low-overhead, fine-grained exploration feedback irrespective of access to programs' source code? 2. What properties of program modification techniques are needed to extend performant testing-enhancing program transformations to closed-source programs? In answering these questions, this thesis produces the following key innovations: A. The first techniques enabling testing of closed-source programs with speed and exploration higher than on open-source programs. (chapter 3) B. The first platform to extend high-speed program transformations from open-source programs to closed-source ones. (chapter 4)
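The coverage feedback loop at the center of this work can be illustrated with a short sketch. The Python snippet below is a hypothetical, heavily simplified model of edge-coverage-guided fuzzing in the style popularized by AFL, not code from the thesis: the toy target, bitmap size, and single mutation operator are illustrative assumptions. In a source-available setting the compiler inserts the equivalent of the trace() callback; the difficulty the abstract describes is recovering equally fast, equally fine-grained feedback when only the binary is available.

```python
import random

MAP_SIZE = 1 << 16          # size of the edge-coverage bitmap (AFL uses 64 KiB)
global_coverage = set()     # edges seen across the whole fuzzing campaign

def run_with_coverage(target, data):
    """Run `target` on `data`, recording hashed (prev_block, cur_block) edges."""
    edges, prev = set(), 0
    def trace(block_id):                     # stands in for inserted instrumentation
        nonlocal prev
        edges.add((prev ^ block_id) % MAP_SIZE)
        prev = block_id >> 1
    target(data, trace)
    return edges

def mutate(data):
    """Random single-bit flip, the simplest havoc-style mutation."""
    if not data:
        return bytes([random.randrange(256)])
    buf = bytearray(data)
    buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(target, seed, iterations=10_000):
    corpus = [seed]
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        edges = run_with_coverage(target, candidate)
        if edges - global_coverage:          # new edge reached: keep this input
            global_coverage.update(edges)
            corpus.append(candidate)
    return corpus

# Toy target: each branch reports a distinct "basic block" ID to the callback.
def toy_target(data, trace):
    trace(1)
    if data[:1] == b"F":
        trace(2)
        if data[1:2] == b"U":
            trace(3)                         # deeper block, only reached via "FU..."

print(len(fuzz(toy_target, b"AAAA")), "inputs kept in the corpus")
```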
12

Fuzz Testing Architecture Used for Vulnerability Detection in Wireless Systems

Mayhew, Stephen Richard 23 June 2022 (has links)
The wireless world of today is essential to the everyday life of millions of people. Wireless technology is evolving at a rapid pace whose speed outmatches what previous testing methods can handle, creating the need for smarter and faster testing methods. One recent fast and efficient testing method is fuzz testing. Fuzz testing is the generation and injection of unexpected, "fuzzed" input into a system: a base input is slightly changed hundreds or even thousands of times, and each change is introduced into the system to observe its effects. In this thesis, we developed and implemented a fuzz testing architecture to test 5G wireless system vulnerabilities. The proposed design uses multiple open-source software packages to create a virtual wireless environment for testing the fuzzed inputs' effects on the wireless attach procedure. Having an accessible and adaptable fuzzing architecture for wireless networks will help defend against malicious parties. Because 5G simulation technology is still being developed and ready-made 5G testing equipment is costly, the architecture was implemented in an LTE environment using the srsRAN LTE simulation software, the Boofuzz fuzzing software, and the Wireshark packet capture software. The results show consistent effects of the fuzz testing on the outputs of the LTE eNB. We also include a discussion of our suggestions for future improvements to the proposed fuzzing architecture. / Master of Science / The persistence of the cellular network is essential to the everyday life of millions of people. Cell phones and cell towers play an important role in business, communication, and recreation across the globe. The speed of advancements in phone and cell tower technology is outpacing the speed of security testing, increasing the possibility of system vulnerabilities and unexplored back-doors. To close this security testing gap, different automated testing models are being researched and developed, one of which is fuzz testing. Fuzz testing is the generation and injection of unexpected, "fuzzed" input into a system: a base input is slightly changed hundreds or even thousands of times, and each change is introduced into the system to observe its effects. The fuzzing architecture proposed in this thesis is used to test for security flaws in wireless cellular networks. We implemented our fuzz testing model in a simulated 4G cellular network, where the results show the effectiveness of the model in tracing network vulnerabilities. The results of the experiment show consistent effects of the fuzz testing on a wireless system. A discussion of how the proposed model can be further improved in future work closes this thesis.
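For readers unfamiliar with Boofuzz, the sketch below shows the general shape of such a setup: define a message template with fixed and fuzzable fields, point a session at a network endpoint, and let the library mutate and send. The address, port, and message layout here are placeholders chosen for illustration only, not the actual srsRAN attach-procedure messages used in the thesis.

```python
from boofuzz import (Session, Target, UDPSocketConnection,
                     s_initialize, s_static, s_string, s_get)

# Placeholder endpoint standing in for the component under test (e.g., an eNB interface).
session = Session(target=Target(connection=UDPSocketConnection("127.0.0.1", 2152)))

# A deliberately simplified message: a fixed header byte followed by a fuzzable field.
s_initialize("attach_request")
s_static(b"\x41")                              # hypothetical message-type byte, kept constant
s_string("IMSI0000000000", name="identity")    # Boofuzz mutates this field

session.connect(s_get("attach_request"))
session.fuzz()    # each mutated message is sent to the target; a packet capture
                  # tool such as Wireshark observes the effects on the receiver
```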
13

Preventing Vulnerabilities and Mitigating Attacks on the MQTT Protocol

Yara, Ahmad January 2020 (has links)
The purpose of this study is to investigate and understand how security breaches can be prevented and mitigated in the MQTT protocol in order to increase its overall security. I am particularly interested in techniques such as Fuzzing, Fuzzy Logic, and Machine Learning. To address this aim, I analyzed and discussed previous implementations of Fuzzing, Fuzzy Logic, and Machine Learning applied to the MQTT protocol. The analysis showed that Fuzzing is regarded as a very effective method for preventing security breaches, and that both Fuzzy Logic and Machine Learning are effective methods for mitigation. In summary, the security level of an MQTT protocol implementation can be raised by applying methods aimed at both preventing and mitigating security breaches. For example, Fuzzing can first be used to find and correct vulnerabilities, thereby preventing them. Fuzzy Logic or Machine Learning can then be used to mitigate sudden attacks on the MQTT protocol while it is in production. This means that a developer can combine methods for both prevention and mitigation in order to increase the security level of an MQTT protocol implementation.
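To make the fuzzing step concrete, the sketch below shows one very simple way to fuzz an MQTT broker: start from a known-valid MQTT 3.1.1 CONNECT packet and repeatedly send randomly corrupted copies of it, watching how the broker reacts. The broker address and the pass/fail criterion are placeholder assumptions; this is a generic illustration, not code from the study.

```python
import random
import socket

# A valid MQTT 3.1.1 CONNECT packet: fixed header, protocol name "MQTT",
# protocol level 4, clean-session flag, 60 s keep-alive, client id "fz1".
BASE_CONNECT = bytes.fromhex("100f00044d5154540402003c0003667a31")

def fuzz_once(host="127.0.0.1", port=1883, flips=2):
    pkt = bytearray(BASE_CONNECT)
    for _ in range(flips):                       # corrupt a few random bytes
        pkt[random.randrange(len(pkt))] = random.randrange(256)
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(bytes(pkt))
            reply = s.recv(4)                    # a well-behaved broker answers with
            return bytes(pkt), reply             # CONNACK or closes the connection cleanly
    except OSError as exc:
        return bytes(pkt), exc                   # connection errors are recorded, not fatal

if __name__ == "__main__":
    for _ in range(100):
        packet, outcome = fuzz_once()
        print(packet.hex(), outcome)
```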
14

To Force a Bug : Extending Hybrid Fuzzing

Näslund, Johan, Nero, Henrik January 2020 (has links)
One of the more promising solutions for automated binary testing today is hybrid fuzzing, a combination of the two acknowledged approaches, fuzzing and symbolic execution, for detecting errors in code. Hybrid fuzzing was pioneered by the authors of Angr and Driller, opening up the possibility for more specialized tools such as QSYM to come forth. These hybrid fuzzers are coverage guided, meaning they measure their success by how much code they have covered. This is a typical approach but, as with many, it is not flawless. Just because a region of code has been covered does not mean it has been fully tested. Some flaws depend on the context in which the code is being executed, such as double-free vulnerabilities. Even if the free routine has been invoked twice, it does not mean that a double-free bug has occurred. To cause such a vulnerability, one has to free the same memory chunk twice (without it being reallocated between the two invocations of free). In this research, we extend one of the current state-of-the-art hybrid fuzzers, QSYM, an open-source project, by adding double-free detection in a tool we call QSIMP. We then investigate our hypothesis, which states that it is possible to implement such functionality without losing so much performance that it would make the tool impractical. To test our hypothesis we have designed two experiments. One experiment tests the ability of our tool to find double-free bugs (the type of context-sensitive bug that we have chosen to test with). In our second experiment, we explore the scalability of the tool when this functionality is executed. Our experiments showed that we were able to implement context-sensitive bug detection within QSYM. We can find most double-free vulnerabilities we have tested it on, although not all, because of some optimizations that we were unable to build past. This has been done with only small effects on scalability according to our tests. Our tool can find the same bugs as the original QSYM while adding functionality to find double-free vulnerabilities.
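The context-sensitive rule described above, that a chunk may only be freed again after it has been reallocated, can be captured with a small amount of shadow state. The Python sketch below illustrates that rule in isolation; it is a conceptual model, not the actual instrumentation QSIMP adds to QSYM.

```python
class DoubleFreeDetector:
    """Tracks the allocation state of each chunk address observed at runtime."""

    def __init__(self):
        self.live = set()       # addresses currently allocated
        self.freed = set()      # addresses freed and not yet reallocated

    def on_malloc(self, addr):
        self.live.add(addr)
        self.freed.discard(addr)        # reallocation resets the freed state

    def on_free(self, addr):
        if addr in self.freed:
            raise RuntimeError(f"double free of chunk {addr:#x}")
        self.live.discard(addr)
        self.freed.add(addr)

# Example event stream: freeing, reallocating, and freeing again is legal ...
det = DoubleFreeDetector()
det.on_malloc(0x1000); det.on_free(0x1000)
det.on_malloc(0x1000); det.on_free(0x1000)     # chunk was reallocated in between

# ... but a second free with no reallocation in between is the context-sensitive bug.
try:
    det.on_free(0x1000)
except RuntimeError as err:
    print("detected:", err)
```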
15

Black-box analýza zabezpečení Wi-Fi / Black-Box Analysis of Wi-Fi Stacks Security

Venger, Adam January 2021 (has links)
The devices we rely on every day are becoming more complex and use increasingly complex protocols. One of these protocols is Wi-Fi. With growing complexity, the potential for implementation errors also increases. This thesis examines the Wi-Fi protocol and the use of fuzz testing to generate semi-valid inputs that could expose vulnerabilities in devices. Special attention was paid to testing Wi-Fi on the ESP32 and ESP32-S2. The results of the work are a fuzzer suitable for testing any Wi-Fi device, a monitoring tool specific to the ESP32, and a set of test programs for the ESP32. The tool did not reveal any potential vulnerabilities.
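As an illustration of the kind of semi-valid traffic such a fuzzer might generate, the sketch below uses Scapy to broadcast beacon frames whose SSID information element is deliberately oversized. The interface name and addresses are placeholders, the code assumes a wireless card already in monitor mode, and it is a generic sketch rather than the tool developed in this thesis.

```python
import os
import random

from scapy.all import Dot11, Dot11Beacon, Dot11Elt, RadioTap, sendp

IFACE = "wlan0mon"                 # placeholder: an interface already in monitor mode
AP_MAC = "02:00:de:ad:be:ef"       # spoofed source/BSSID address

def malformed_beacon():
    # SSID element whose length exceeds the 32 bytes the standard allows.
    ssid = bytes(random.randrange(256) for _ in range(random.randint(33, 255)))
    return (RadioTap()
            / Dot11(type=0, subtype=8, addr1="ff:ff:ff:ff:ff:ff",
                    addr2=AP_MAC, addr3=AP_MAC)
            / Dot11Beacon(cap="ESS")
            / Dot11Elt(ID="SSID", info=ssid, len=len(ssid)))

if __name__ == "__main__":
    if os.geteuid() != 0:
        raise SystemExit("raw 802.11 frame injection requires root privileges")
    for _ in range(100):
        sendp(malformed_beacon(), iface=IFACE, verbose=False)
```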
16

A Framework for Software Security Testing and Evaluation

Dutta, Rahul Kumar January 2015 (has links)
Security in the automotive industry is a growing concern these days. As more smart electronic devices become connected to each other, our dependency on these devices is driving us to connect them with moving objects such as cars, buses, and trucks. As such, safety and security issues related to automotive objects are becoming more relevant in the realm of internet-connected devices and objects. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC). Input invalidation is one of them that we address in our work. We implement a security evaluation framework that allows us to improve security in automotive software by identifying and removing software security vulnerabilities that arise from input invalidation during the SDLC. We propose to use this framework in the implementation and testing phases so that critical security-by-design deficiencies in software can be easily addressed and mitigated.
17

Practical Type and Memory Safety Violation Detection Mechanisms

Yuseok Jeon (9217391) 29 August 2020 (has links)
System programming languages such as C and C++ are designed to give the programmer full control over the underlying hardware. However, this freedom comes at the cost of type and memory safety violations, which may allow an attacker to compromise applications. In particular, type safety violation, also known as type confusion, is one of the major attack vectors used to corrupt modern C++ applications. In past years, several type confusion detectors have been proposed, but they are severely limited by high performance overhead, low detection coverage, and high false positive rates. To address these issues, we propose HexType and V-Type. First, we propose HexType, a tool that provides low-overhead disjoint metadata structures, compiler optimizations, and handling of specific object allocation patterns. Compared to prior work, HexType significantly improves detection coverage and reduces performance overhead. In addition, HexType discovers new type confusion bugs in real-world programs such as Qt and Apache Xerces-C++. However, although HexType significantly reduces performance overhead and improves detection coverage over prior work, it still has considerable overhead from managing the disjoint metadata structure and tracking individual objects, and it has false positives from imprecise object tracking. To address these issues, we propose a further advanced mechanism, V-Type, which forcibly changes non-polymorphic types into polymorphic types to make sure all objects maintain type information. By doing this, V-Type removes the burden of tracking object allocation and deallocation and of managing a disjoint metadata structure, which reduces performance overhead and improves detection precision. Another major attack vector is memory safety violations, which attackers can take advantage of by accessing out-of-bounds or deleted memory. For memory safety violation detection, combining a fuzzer with sanitizers is a popular and effective approach. However, we find that the heavy metadata structures of current sanitizers hinder fuzzing effectiveness. Thus, we introduce FuZZan to optimize sanitizer metadata structures for fuzzing. Consequently, FuZZan improves fuzzing throughput, which helps the tester discover more unique paths in the same amount of time and find bugs faster. In conclusion, my research aims to eliminate critical and common C/C++ memory and type safety violations through practical program analysis techniques. Toward this goal, through these three projects, I contribute techniques to our community for effectively detecting type and memory safety violations.
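The disjoint-metadata idea behind HexType can be sketched abstractly: record each object's true type in a side table at allocation time, then validate every downcast against that record. The Python model below is only a conceptual illustration of that rule with invented class names; HexType itself is compiler-level instrumentation for C++.

```python
class Base: pass
class Derived(Base): pass
class Unrelated: pass

allocated_type = {}                      # disjoint metadata: object id -> allocated type

def checked_new(cls):
    obj = cls()
    allocated_type[id(obj)] = cls        # instrumented allocation records the true type
    return obj

def checked_cast(obj, target):
    true_type = allocated_type[id(obj)]
    if not issubclass(true_type, target):        # the cast target must be an ancestor of
        raise TypeError(                         # (or equal to) the allocated type
            f"type confusion: {true_type.__name__} object cast to {target.__name__}")
    return obj

d = checked_new(Derived)
checked_cast(d, Base)                    # fine: a Derived object really is a Base

b = checked_new(Base)
try:
    checked_cast(b, Derived)             # flagged: a Base object used as a Derived
except TypeError as err:
    print(err)
```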
18

Cybersecurity Testing and Intrusion Detection for Cyber-Physical Power Systems

Pan, Shengyi 13 December 2014 (has links)
Power systems will increasingly rely on synchrophasor systems for reliable and high-performance wide area monitoring and control (WAMC). Synchrophasor systems rely heavily on information and communication technologies (ICT) for data exchange, and these technologies are vulnerable to cyber-attacks. Prior to installation of a synchrophasor system, a set of cyber security requirements must be developed, and new devices must undergo vulnerability testing to ensure that proper security controls are in place to protect the synchrophasor system from unauthorized access. This dissertation describes vulnerability analysis and testing performed on synchrophasor system components. Two network fuzzing frameworks are proposed: one for the IEEE C37.118 protocol and one for an energy management system (EMS). While fixing the identified vulnerabilities in information infrastructures is imperative to securing a power system, it is likely that successful intrusions will still occur. The ability to detect intrusions is necessary to mitigate the negative effects of successful attacks. The emergence of synchrophasor systems provides real-time data with millisecond precision, which makes observing a sequence of fast events feasible. Different power system scenarios present different patterns in the observed fast event sequences. This dissertation proposes a data mining approach called mining common paths to accurately extract patterns for power system scenarios, including disturbances, control and protection actions, and cyber-attacks, from synchrophasor data and logs of system components. In this dissertation, such a pattern is called a common path, which is represented as a sequence of critical system states in temporal order. The process of automatically discovering common paths and building a state machine for detecting power system scenarios and attacks is introduced. The classification results show that the proposed approach can accurately detect these scenarios even with variation in fault locations and load conditions. This dissertation also describes a hybrid intrusion detection framework that employs the mining common paths algorithm to enable a systematic and automatic IDS construction process. An IDS prototype was validated on a 2-line, 3-bus power transmission system protected by a distance protection scheme. The results show that the IDS prototype accurately classifies 25 power system scenarios, including disturbances, normal control operations, and cyber-attacks.
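The notion of a common path, the ordered critical states shared by every logged instance of a scenario, can be approximated by reducing the logged state sequences with a longest-common-subsequence operation and then matching new observations against the result. The sketch below is a simplified, hypothetical reconstruction with invented state labels; the dissertation's actual mining algorithm and synchrophasor data are considerably richer.

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two state sequences (classic dynamic program)."""
    table = [[()] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            table[i + 1][j + 1] = (table[i][j] + (x,) if x == y
                                   else max(table[i][j + 1], table[i + 1][j], key=len))
    return table[-1][-1]

def common_path(instances):
    """Reduce all logged instances of a scenario to their shared ordered states."""
    return reduce(lcs, instances)

def matches(observed, path):
    """Does the observed sequence contain the common path as a subsequence?"""
    it = iter(observed)
    return all(state in it for state in path)

# Invented relay/breaker state sequences for a "fault then trip" scenario.
fault_logs = [
    ("normal", "low_impedance", "relay_pickup", "breaker_open"),
    ("normal", "low_impedance", "oscillation", "relay_pickup", "breaker_open"),
]
path = common_path(fault_logs)
print(path)                                               # the shared critical states
print(matches(("normal", "low_impedance", "relay_pickup", "breaker_open"), path))
print(matches(("normal", "breaker_open"), path))          # does not match this scenario
```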
19

Practical Methods for Fuzzing Real-World Systems

Prashast Srivastava (15353365) 27 April 2023 (has links)
The current software ecosystem is exceptionally complex. A key defining feature of this complexity is the vast input space that software applications must process. This feature inhibits fuzzing (an effective automated testing methodology) in uncovering deep bugs (i.e., bugs with complex preconditions). We improve the bug-finding capabilities of fuzzers by reducing the input space that they have to explore. Our techniques incorporate domain knowledge from the software under test. In this dissertation, we research how to incorporate domain knowledge in different scenarios across a variety of software domains and test objectives to perform deep bug discovery.

We start by focusing on language interpreters that form the backend of our web ecosystem. Uncovering deep bugs in these interpreters requires synthesizing inputs that perform a diverse set of semantic actions. To tackle this issue, we present Gramatron, a fuzzer that employs grammar automatons to speed up bug discovery. Then, we explore firmwares belonging to the rapidly growing IoT ecosystem, which generally lack thorough testing. FirmFuzz infers the appropriate runtime state required to trigger vulnerabilities in these firmwares using the domain knowledge encoded in the user-facing network applications. Additionally, we showcase how our proposed strategy to incorporate domain knowledge is beneficial under alternative testing scenarios where a developer analyzes specific code locations, e.g., for patch testing. SieveFuzz leverages knowledge of targeted code locations to prohibit exploration of code regions, and correspondingly parts of the input space, that are irrelevant to reaching the target location. Finally, we move beyond the realm of memory-safety vulnerabilities and present how domain knowledge can be useful in uncovering logical bugs, specifically deserialization vulnerabilities in Java-based applications, with Crystallizer. Crystallizer uses a hybrid analysis methodology to first infer an over-approximate set of possible payloads through static analysis (to constrain the search space). Then, it uses dynamic analysis to instantiate concrete payloads as a proof-of-concept of a deserialization vulnerability.

Throughout these four diverse areas we thoroughly demonstrate how incorporating domain knowledge can massively improve bug finding capabilities. Our research has developed tooling that not only outperforms the existing state-of-the-art in terms of efficient bug discovery (with speeds up to 117% faster), but has also uncovered 18 previously unknown bugs, with five CVEs assigned.
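Gramatron's use of grammar automatons can be sketched in a few lines: encode the grammar as a state machine whose transitions emit tokens, then generate test inputs by random walk, so every generated input is well-formed without repeated parse-tree rewriting. The toy automaton below is a hypothetical stand-in for a real language grammar, not the tool's implementation.

```python
import random

# Toy grammar automaton: each state maps to choices of (token to emit, next state).
# "ACCEPT" is the final state. This stands in for the automaton Gramatron would
# build from a real grammar such as JavaScript's.
AUTOMATON = {
    "STMT": [("var x = ", "EXPR"), ("if (x > 1) x = ", "EXPR")],
    "EXPR": [("1", "END"), ("x", "END"), ("(1 + x)", "END")],
    "END":  [(";\n", "ACCEPT"), (" * 2;\n", "ACCEPT")],
}

def generate(start="STMT"):
    """Random walk over the automaton; every walk ends in ACCEPT, so the
    emitted string is well-formed by construction."""
    out, state = [], start
    while state != "ACCEPT":
        token, state = random.choice(AUTOMATON[state])
        out.append(token)
    return "".join(out)

# Mutations such as splicing operate on these walks directly: keep a prefix of
# one walk's states and regenerate the suffix with fresh random choices.
if __name__ == "__main__":
    for _ in range(5):
        print(generate(), end="")
```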
20

Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines

Tobkin, Toby 01 May 2013 (has links)
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: blackbox or whitebox. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated input, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not do any "thinking" about the software under test to generate inputs, blackbox fuzzers can generate more inputs per unit time if all other factors are equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study exists that attempts to characterize an answer. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and then extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and on parts of a file compression library.
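A blackbox fuzzer of the kind described can be remarkably small. The sketch below mutates a valid gzip stream and feeds it to Python's decompressor, treating anything other than the expected rejection errors as a finding; the target routine, seed, and crash criterion are illustrative stand-ins rather than the thesis's actual harness.

```python
import gzip
import random
import zlib

# Seed input: a well-formed gzip stream the mutator will corrupt.
SEED = gzip.compress(b"The quick brown fox jumps over the lazy dog.\n" * 20)

def mutate(data, max_flips=8):
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def blackbox_fuzz(iterations=10_000):
    findings = []
    for i in range(iterations):
        case = mutate(SEED)
        try:
            gzip.decompress(case)                # the routine under test
        except (OSError, EOFError, zlib.error):
            pass                                 # expected rejections of malformed input
        except Exception as exc:                 # anything else is worth investigating
            findings.append((i, type(exc).__name__, case))
    return findings

if __name__ == "__main__":
    print(len(blackbox_fuzz()), "unexpected exceptions")
```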
