31

Investigating the Effectiveness of Forward-Porting Bugs

Nyquist, Fredrik January 2023
This research investigates the effectiveness of the forward-porting approach employed in the Magma framework as a fault injection technique for evaluating fuzzers. The study aims to assess the use of Proof-of-Concepts in reproducing crashes in CVEs and to evaluate the feasibility of forward-porting vulnerabilities into later software versions. An experiment was conducted on three selected open-source libraries to explore whether vulnerabilities could be triggered or reached in the latest versions through the forward-porting approach. The findings suggest that forward-porting may not be the most effective method for injecting vulnerabilities into software systems: of the 22 CVEs chosen for analysis, only one could be triggered and two could be reached. This indicates that many of the injected vulnerabilities become obsolete or have unsatisfiable trigger conditions in later versions. Additionally, manual verification of these vulnerabilities has been found to be time-consuming and challenging. Further research is necessary for a comprehensive evaluation of the effectiveness of the forward-porting approach in vulnerability injection.
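The replay step at the heart of such an experiment can be pictured as a small harness: run a forward-ported build of the target on a CVE's Proof-of-Concept input and classify the outcome. This is an illustrative sketch, not the Magma tooling; the target binary (`/bin/cat` as a stand-in) and the crash classification are assumptions.

```python
import os
import subprocess
import tempfile

def replay_poc(binary, poc_bytes, timeout=10):
    """Run a target binary on a PoC input and classify the outcome.

    Returns 'triggered' if the process dies on a signal (e.g. SIGSEGV),
    'hang' on timeout, and 'not-triggered' on a normal exit. Detecting
    'reached' (the vulnerable code executed without crashing) would need
    extra instrumentation and is out of scope for this sketch.
    """
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(poc_bytes)
        path = f.name
    try:
        proc = subprocess.run([binary, path], capture_output=True, timeout=timeout)
        # Negative return codes on POSIX mean the process was killed by a signal.
        return "triggered" if proc.returncode < 0 else "not-triggered"
    except subprocess.TimeoutExpired:
        return "hang"
    finally:
        os.unlink(path)

if __name__ == "__main__":
    # '/bin/cat' stands in for a harness around a patched library build; a
    # real study would point this at each forward-ported version in turn.
    print(replay_poc("/bin/cat", b"poc-input"))
```

A study like the one above would loop this over every (CVE, version) pair and tabulate the triggered/reached/obsolete counts.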
32

APEX-ICS: Automated Protocol Exploration and Fuzzing For Closed-Source ICS Protocols

Parvin Kumar (15354694) 28 April 2023
Closed-source ICS protocol communication is a fundamental component of the supervisory software and PLCs that operate critical infrastructure or configure field devices. Because this communication is vital, a compromised protocol can allow attackers to take over the entire critical-infrastructure network and maliciously manipulate field-device values. It is therefore crucial to conduct security assessments of these closed-source protocol communications before deploying them in a production environment. However, fuzzing closed-source communication without understanding the protocol's structure or state is ineffective, which makes testing such communications a challenging task.

This research study introduces the APEX-ICS framework, which consists of two significant components: automatic closed-source ICS protocol reverse engineering and stateful black-box fuzzing. The former reverse-engineers the protocol communication, which is critical to performing the fuzzing effectively. The latter leverages the generated grammar to detect vulnerabilities in communication between supervisory software and PLCs. The framework prototype was implemented against the Codesys v3.0 closed-source protocol and successfully identified 4 previously unknown vulnerabilities, which were found to impact devices from more than 400 manufacturers.
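The stateful black-box component can be illustrated in miniature: drive the session along a recovered message grammar to a target state, then mutate only the message sent in that state. The grammar, transport callback, and single-byte mutation below are hypothetical stand-ins for what APEX-ICS reverse-engineers from Codesys traffic, not its actual message formats.

```python
import random

# Hypothetical message grammar recovered by reverse engineering: each state
# lists the request that advances the session and the state it leads to.
GRAMMAR = {
    "INIT":    {"msg": b"\x01HELLO", "next": "AUTH"},
    "AUTH":    {"msg": b"\x02LOGIN", "next": "SESSION"},
    "SESSION": {"msg": b"\x03WRITE", "next": "SESSION"},
}

def mutate(msg, rng):
    """Flip one random byte with a nonzero XOR -- the simplest mutation."""
    i = rng.randrange(len(msg))
    return msg[:i] + bytes([msg[i] ^ rng.randrange(1, 256)]) + msg[i + 1:]

def fuzz_session(transport, target_state, rng):
    """Replay the legal prefix up to `target_state`, then send one mutated
    message. `transport(data) -> reply` abstracts the connection to the PLC."""
    state = "INIT"
    while state != target_state:
        step = GRAMMAR[state]
        transport(step["msg"])          # valid handshake messages
        state = step["next"]
    payload = mutate(GRAMMAR[state]["msg"], rng)
    return payload, transport(payload)  # only the in-state message is fuzzed

# Demo with a fake transport that echoes the message-type byte.
rng = random.Random(0)
trace = []
def record_transport(msg):
    trace.append(msg)
    return msg[:1]

payload, reply = fuzz_session(record_transport, "SESSION", rng)
```

Replaying a valid prefix before mutating is what makes the fuzzing "stateful": without it, most mutated messages would be rejected before reaching deep protocol logic.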
33

Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities

Huang, Jin 01 September 2021
No description available.
34

Fuzz Testing for Quality Control in Systems with Complex Input Data

Bodin, Josefin January 2023
Fuzz testing is a testing technique used to generate a large amount of random or semi-random input data. This data is fed to a target system, which is run with it and monitored for anomalous behaviour. But as systems, and consequently their inputs, become increasingly complex, fuzz testing becomes less efficient: pure randomisation no longer yields many useful results, and the long execution chains that can arise in complex systems create a demand for configurability in order to generate useful test data and keep testing efficient in the long term. This thesis adds high-level configurability to a fuzz testing tool and evaluates it on a proprietary hard real-time operating system. The results show that this approach might not work all that well on the target system used during this thesis, but it is still believed to be an approach to fuzz testing that may be useful in other settings.
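One way to realise the "high-level configurability" described above is to let the tester declare the input's structure and randomise only within it. The field kinds and names below are invented for illustration and do not reflect the thesis's actual tool:

```python
import random
import struct

# Hypothetical high-level configuration: instead of pure random bytes, the
# tester describes the input's structure and the fuzzer randomises within it.
CONFIG = [
    {"name": "magic",  "kind": "const", "value": b"MSG1"},
    {"name": "length", "kind": "u16",   "min": 0, "max": 65535},
    {"name": "body",   "kind": "bytes", "min_len": 1, "max_len": 32},
]

def generate(config, rng):
    """Build one test input by walking the field descriptions in order."""
    out = b""
    for field in config:
        if field["kind"] == "const":
            out += field["value"]                       # fixed framing bytes
        elif field["kind"] == "u16":
            out += struct.pack(">H", rng.randint(field["min"], field["max"]))
        elif field["kind"] == "bytes":
            n = rng.randint(field["min_len"], field["max_len"])
            out += bytes(rng.randrange(256) for _ in range(n))
    return out

rng = random.Random(1234)
sample = generate(CONFIG, rng)
```

Keeping the constant framing intact lets generated inputs pass early parsing checks, so randomisation is spent on the fields where it can actually exercise deeper code.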
35

GONet: Gradient Oriented Fuzzing for Stateful Network Protocol : Improving and Evaluating Fuzzing Efficiency of Stateful Protocol by Mutating Based on Gradient Information

Tao, Quanyu January 2023
Network protocols play a crucial role in supporting a wide range of critical services, for which robustness and reliability are vital. Fuzzing, or fuzz testing, is an effective technique for uncovering vulnerabilities in software programs. However, fuzzing becomes more complicated when dealing with network protocols because of their large state spaces. In stateful protocols, the current state can be significantly influenced by previous communication, which renders most test cases invalid and leads to low fuzzing efficiency. To overcome these challenges, this thesis proposes GONET, a novel grey-box fuzzer designed for stateful network protocols. GONET distinguishes itself from other fuzzers by introducing a state space and using gradient information from neural networks to guide the seed mutation process. This approach significantly enhances the quality of test cases and markedly increases fuzzing efficiency. Furthermore, GONET's lightweight structure enables faster execution of individual fuzzing tests, further improving its overall efficiency. Evaluated on two popular protocol implementations, GONET outperforms two popular stateful protocol fuzzers (AFLNet and AFLNwe) in edge coverage and fuzzing speed.
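The role of gradients in seed mutation can be illustrated with a toy objective: estimate how each byte affects a branch-distance-style score by finite differences, then move the most influential byte against its gradient. This stands in for GONET's neural-network gradients and is not its actual algorithm; the objective below is invented.

```python
def branch_distance(data):
    """Toy surrogate for a learned objective: distance to satisfying a
    branch `data[2] == 0x7f` deep in the target program."""
    return abs(data[2] - 0x7f)

def estimate_gradient(data, objective):
    """Finite-difference gradient of the objective w.r.t. each input byte,
    mimicking how a model's gradients rank mutation positions."""
    base = objective(data)
    grads = []
    for i in range(len(data)):
        bumped = data[:i] + bytes([(data[i] + 1) % 256]) + data[i + 1:]
        grads.append(objective(bumped) - base)
    return grads

def gradient_mutate(data, objective, steps=300):
    """Greedily move the most influential byte against its gradient."""
    for _ in range(steps):
        grads = estimate_gradient(data, objective)
        i = max(range(len(data)), key=lambda j: abs(grads[j]))
        if grads[i] == 0:
            break                         # no byte influences the objective
        delta = -1 if grads[i] > 0 else 1
        data = data[:i] + bytes([(data[i] + delta) % 256]) + data[i + 1:]
        if objective(data) == 0:
            break                         # branch condition satisfied
    return data

found = gradient_mutate(b"\x00\x00\x00\x00", branch_distance)
```

Compared with blind byte-flipping, gradient-ranked mutation concentrates changes on the bytes a hard branch actually depends on, which is the efficiency gain the thesis measures.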
36

Detecting and mitigating software security vulnerabilities through secure environment programming

Blair, William 26 March 2024
Adversaries continue to exploit software in order to infiltrate organizations’ networks, extract sensitive information, and hijack control of computing resources. Given the grave threat posed by unknown security vulnerabilities, continuously monitoring for vulnerabilities during development and evidence of exploitation after deployment is now standard practice. While the tools that perform this analysis and monitoring have evolved significantly in the last several decades, many approaches require either directly modifying a program’s source code or its intermediate representation. In this thesis, I propose methods for efficiently detecting and mitigating security vulnerabilities in software without requiring access to program source code or instrumenting individual programs. At the core of this thesis is a technique called secure environment programming (SEP). SEP enhances execution environments, which may be CPUs, language interpreters, or computing clouds, to detect security vulnerabilities in production software artifacts. Furthermore, environment based security features allow SEP to mitigate certain memory corruption and system call based attacks. This thesis’ key insight is that a program’s execution environment may be augmented with functionality to detect security vulnerabilities or protect workloads from specific attack vectors. I propose a novel vulnerability detection technique called micro-fuzzing which automatically detects algorithmic complexity (AC) vulnerabilities in both time and space. The detected bugs and vulnerabilities were confirmed by vendors of real-world Java libraries. Programs implemented in memory unsafe languages like C/C++ are popular targets for memory corruption exploits. In order to protect programs from these exploits, I enhance memory allocators with security features available in modern hardware environments. 
I use efficient hash algorithm implementations and memory protection keys (MPKs) available on recent CPUs to enforce security policies on application memory. Finally, I deploy a microservice-aware policy monitor (MPM) that detects security policy deviations in container telemetry. These security policies are generated from binary analysis over container images. Embedding MPMs derived from binary analysis in micro-service environments allows operators to detect compromised components without modifying container images or incurring high performance overhead. Applying SEP at varying levels of the computing stack, from individual programs to popular micro-service architectures, demonstrates that SEP efficiently protects diverse workloads without requiring program source or instrumentation.
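The time-and-space probing behind micro-fuzzing can be sketched as a cost probe: execute a candidate function on inputs of growing size and watch for superlinear resource growth. `quadratic_scan` below is a deliberately simple stand-in for a vulnerable library method, not an example from the thesis.

```python
import timeit

def worst_case_cost(func, make_input, sizes, repeat=3):
    """Micro-fuzzing style probe: feed a function growing inputs and record
    its best-of-`repeat` cost at each size; superlinear growth across sizes
    hints at an algorithmic complexity (AC) vulnerability."""
    costs = []
    for n in sizes:
        arg = make_input(n)
        t = min(timeit.repeat(lambda: func(arg), number=1, repeat=repeat))
        costs.append((n, t))
    return costs

# A classic AC culprit is catastrophic regex backtracking; here it is
# replaced by an explicitly quadratic routine so the demo is portable.
def quadratic_scan(s):
    return sum(1 for i in range(len(s)) for j in range(i) if s[i] == s[j])

costs = worst_case_cost(quadratic_scan, lambda n: "a" * n, [100, 400, 1600])
```

In a real micro-fuzzing campaign the inputs would themselves be evolved toward higher cost, rather than being fixed homogeneous strings.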
37

Hybrid Differential Software Testing

Noller, Yannic 16 October 2020
Differential software testing is important for software quality assurance, as it aims to automatically generate test inputs that reveal behavioral differences in software. The concrete analysis procedure depends on the targeted result: differential testing can reveal divergences between two execution paths (1) of different program versions or (2) within the same program. The first analysis type executes different program versions with the same input, while the second executes the same program with different inputs. Detecting regression bugs in software evolution, analyzing side-channels in programs, maximizing the execution cost of a program over multiple executions, and evaluating the robustness of neural networks are all instances of differential software analysis whose goal is to generate diverging executions of program paths. The key challenge of differential software testing is to reason about multiple program paths simultaneously, often across program variants, in an efficient way. Existing work in differential testing is often not (specifically) directed at revealing behavioral differences or is limited to a subset of the search space. This PhD thesis proposes the concept of Hybrid Differential Software Testing (HyDiff), a hybrid analysis technique for generating difference-revealing inputs. HyDiff consists of two components that operate in parallel: (1) a search-based technique that inexpensively generates inputs and (2) a systematic exploration technique that also exercises deeper program behaviors. HyDiff's search-based component uses differential fuzzing directed by differential heuristics. HyDiff's systematic exploration component is based on differential dynamic symbolic execution, which allows it to incorporate concrete inputs in its analysis. HyDiff is evaluated experimentally with applications specific to differential testing. The results show that HyDiff is effective in all considered categories and outperforms its components in isolation.
38

Fuzzing Radio Resource Control messages in 5G and LTE systems : To test telecommunication systems with ASN.1 grammar rules based adaptive fuzzer

Potnuru, Srinath January 2021
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review, and conformance testing. With fuzzing, testers can send crafted inputs and monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for telecom systems, produces new versions of specifications as part of continuously evolving features and enhancements. This leads to many versions of specifications for a network protocol like Radio Resource Control (RRC), and testers need to constantly update their testing tools and testing environment. In this work, it is shown that by using the generic nature of the RRC specifications, which are given in the Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool that adapts to all versions of the 3GPP specifications. This thesis introduces an ASN.1-based adaptive fuzzer that can be used for testing RRC and other network protocols based on the ASN.1 description language. The fuzzer extracts knowledge about ongoing RRC messages from the protocol description files of RRC, i.e., the RRC ASN.1 schema from 3GPP, and uses that knowledge to fuzz RRC messages. The adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to the specifications when mutating the content of existing messages. Furthermore, the adaptive fuzzer has identified a previously unknown vulnerability in the Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming its applicability to robustness testing of RRC and other network protocols.
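The grammar-aware mutation idea scales down to a few lines: decode a message into typed fields, change one value, and re-encode so the lengths stay consistent. The two-byte tag-length-value format below is a drastic simplification of BER/ASN.1 encoding, used only to illustrate the principle; it is not the fuzzer's actual machinery.

```python
def decode_tlv(data):
    """Parse tag-length-value triples (1-byte tag, 1-byte length)."""
    fields, i = [], 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        fields.append((tag, data[i + 2:i + 2 + length]))
        i += 2 + length
    return fields

def encode_tlv(fields):
    """Re-encode fields, recomputing each length byte from the value."""
    return b"".join(bytes([tag, len(val)]) + val for tag, val in fields)

def mutate_field(fields, index, new_value):
    """Replace one field's value; because the encoder fixes up the length,
    the message stays structurally valid while its content is fuzzed."""
    fields = list(fields)
    tag, _ = fields[index]
    fields[index] = (tag, new_value)
    return fields

# A SEQUENCE-like field followed by an INTEGER-like field, then fuzz the
# integer to an oversized value while keeping the framing well-formed.
msg = bytes([0x30, 3]) + b"abc" + bytes([0x02, 1]) + b"\x05"
fuzzed = encode_tlv(mutate_field(decode_tlv(msg), 1, b"\xff" * 4))
```

Mutating at the field level is what lets a grammar-aware fuzzer reach logic past the decoder, where byte-level corruption would simply be rejected as malformed.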
39

Static Analysis Of Client-Side JavaScript Code To Detect Server-Side Business Logic Vulnerabilities

van der Windt, Frederick January 2023
In the real world, web applications are crucial in various domains, from e-commerce to finance and healthcare. However, these applications are not immune to vulnerabilities, particularly in business logic. Detecting such vulnerabilities can be challenging due to the complexity and diversity of application functionality. Consequently, there is a growing need for automated tools and techniques that aid in identifying business logic vulnerabilities. This research study investigates the efficacy of static analysis techniques in detecting server-side business logic vulnerabilities through the analysis of client-side JavaScript code. The study explores various analysis techniques, including code parsing and data flow analysis as detection methods, and their application in identifying potential vulnerabilities. The thesis also identifies common flaws contributing to business logic vulnerabilities, such as insufficient input validation, insecure access controls, and flawed decision-making logic. The effectiveness of static analysis in pinpointing server-side business logic vulnerabilities is evaluated, revealing promising results, particularly in detecting parameter manipulation vulnerabilities. Notably, the study discovered vulnerabilities in two live applications that could lead to severe financial problems, underscoring the real-world implications of such flaws. However, challenges such as false positives and the need for manual verification are also acknowledged. The study concludes by proposing improvements and future research directions, including exploring advanced techniques like machine learning and natural language processing and integrating dynamic analysis and real-world testing scenarios to enhance the accuracy and efficiency of static analysis. The findings contribute to the understanding of static analysis for detecting server-side business logic vulnerabilities and offer insights for developing more robust and efficient vulnerability detection tools.
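As a sketch of the client-side analysis idea, consider flagging request parameters fed directly from form inputs with no validation call on the same statement. A real implementation would parse the JavaScript into an AST and track data flow rather than use regular expressions; the patterns and the sample snippet below are invented for illustration.

```python
import re

# Parameter whose value comes straight from a form input element.
PARAM_RE = re.compile(
    r"(\w+)\s*:\s*document\.getElementById\(['\"](\w+)['\"]\)\.value")
# Crude signal that some validation wrapper intervenes on the same line.
VALIDATE_RE = re.compile(r"validate\w*\s*\(")

def flag_unvalidated_params(js_source):
    """Return (line, parameter, input-element) triples for candidate
    parameter-manipulation issues such as price or quantity tampering."""
    flagged = []
    for line_no, line in enumerate(js_source.splitlines(), 1):
        for param, element in PARAM_RE.findall(line):
            if not VALIDATE_RE.search(line):
                flagged.append((line_no, param, element))
    return flagged

SAMPLE = """
const order = {
  price: document.getElementById('price').value,
  qty: validateQty(document.getElementById('qty').value),
};
fetch('/api/order', {method: 'POST', body: JSON.stringify(order)});
"""
findings = flag_unvalidated_params(SAMPLE)
```

A finding like `price` here is only a candidate: the server may still validate it, which is exactly the false-positive and manual-verification burden the thesis reports.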
40

INFERENCE OF RESIDUAL ATTACK SURFACE UNDER MITIGATIONS

Kyriakos K Ispoglou (6632954) 14 May 2019
Despite the broad diversity of attacks and the many different ways an adversary can exploit a system, each attack can be divided into phases: the discovery of a vulnerability in the system, its exploitation, and the achievement of persistence on the compromised system for (potential) further compromise and future access. Determining the exploitability of a system, and hence the success of an attack, remains a challenging, manual task. Not only can the problem not be formally defined, but advanced protections and mitigations further complicate the analysis and raise the bar for any successful attack. Nevertheless, under certain circumstances it is still possible for an attacker to circumvent all of the existing defenses.

In this dissertation, we define and infer the Residual Attack Surface of a system. That is, we expose the limitations of state-of-the-art mitigations by showing practical ways to circumvent them. The work is divided into four parts. It assumes an attack with three phases and proposes new techniques to infer the Residual Attack Surface at each stage.

The first part focuses on vulnerability discovery. We propose FuzzGen, a tool for automatically generating fuzzer stubs for libraries. The synthesized fuzzers are target specific, thus achieving high code coverage. This enables developers to expose and fix vulnerabilities that reside deep in the code and require a complex initialized state to trigger, before they can be exploited. We then move to vulnerability exploitation and present a novel technique called Block Oriented Programming (BOP), which automates data-only attacks. Data-only attacks defeat advanced control-flow hijacking defenses such as Control Flow Integrity. Our framework, called BOPC, maps arbitrary exploit payloads into execution traces and encodes them as a set of memory writes. An attacker's intended execution therefore "sticks" to the execution flow of the underlying binary and never departs from it. In the third part of the dissertation, we present an extension of BOPC that provides measurements giving strong indications of which types of exploit payloads cannot be executed. BOPC thus enables developers to test what data an attacker could compromise and supports evaluation of the Residual Attack Surface to assess an application's risk. Finally, for the last part, achieving persistence on the compromised system, we present malWASH, a new technique for constructing arbitrary malware that evades current dynamic and behavioral analysis. The malware is split into hundreds (or thousands) of small pieces, and each piece is injected into a different process. A special emulator coordinates and synchronizes the execution of all the individual pieces, achieving a "distributed execution" across multiple address spaces. malWASH highlights weaknesses of current dynamic and behavioral analysis schemes and argues for full-system provenance.

Our vision is to expose the weaknesses of deployed mitigations, protections, and defenses through the Residual Attack Surface, helping the research community reinforce existing defenses or devise new, more effective ones.
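The fuzzer stubs that FuzzGen synthesizes can be pictured in miniature: a harness consumes one flat byte buffer and turns it into a well-formed API-call sequence (init, use, teardown) on a target library. The toy `Parser` class below is a hypothetical stand-in for a real library's API, not FuzzGen's output format.

```python
class Parser:
    """Toy library API: construct with a mode, feed chunks, then finish."""

    def __init__(self, mode):
        self.mode = mode
        self.fed = 0

    def feed(self, chunk):
        self.fed += len(chunk)

    def finish(self):
        return self.fed

def fuzz_one_input(data):
    """Drive the library through a full lifecycle derived from `data`,
    the way a generated stub maps fuzzer bytes onto API arguments."""
    if len(data) < 2:
        return None                       # too short to build a call sequence
    p = Parser(mode=data[0] % 4)          # first byte selects an init flag
    n_chunks = 1 + data[1] % 4            # second byte selects call count
    body = data[2:]
    step = max(1, len(body) // n_chunks)
    for i in range(0, len(body), step):   # remaining bytes become chunks
        p.feed(body[i:i + step])
    return p.finish()
```

Because the stub always respects the init/use/teardown order, every fuzzer input reaches library logic instead of dying in argument-validation code, which is where the target-specific coverage gain comes from.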
