1 |
Finding and remedying high-level security issues in binary code. Dewey, David Bryan, 07 January 2016 (has links)
C++ and Microsoft's Component Object Model (COM) are examples of a high-level language and development framework that were built on top of the lower-level, primitive language C. C was never designed to support concepts like object orientation, type enforcement, and language independence. Further, these languages and frameworks are designed to compile and run directly on the processor, where these concepts are also not supported. Other high-level languages that do support these concepts make use of a runtime or virtual machine to create a computing model to suit their needs. By forcing these high-level concepts into a primitive computing model, many security issues have been introduced. Existing binary-level security analysis tools and runtime enforcement frameworks operate at the lowest level of context. As such, they struggle to detect and remedy higher-level security issues. In this dissertation, a framework for elevating the context of binary code is presented. By bringing the context for analysis closer to where these security issues are introduced, this framework allows higher-level analyses and enforcement frameworks to be developed.
|
2 |
Formal fault injection vulnerability detection in binaries: a software process and hardware validation / Détection formelle de vulnérabilité créée par injection de faute au niveau binaire : un processus logiciel et une validation matérielle. Jafri, Nisrine, 25 March 2019 (has links)
Fault injection is a well-known method for testing the robustness and security vulnerabilities of systems. Detecting fault injection vulnerabilities has been approached with a variety of different but limited methods; both software-based and hardware-based approaches have been used. Software-based approaches can provide broad and rapid coverage, but may not correlate with genuine hardware vulnerabilities. Hardware-based approaches are indisputable in their results, but rely upon expensive equipment, deep expert knowledge, and manual testing, and in most cases cannot confirm which fault model represents the effect created. First, this thesis focuses on the software-based approach and proposes an automated process that uses model checking to detect fault injection vulnerabilities in binaries. The efficacy and scalability of this process are demonstrated by detecting vulnerabilities in real-world cryptographic implementations for embedded systems. Second, this thesis bridges software-based and hardware-based fault injection vulnerability detection by contrasting the results of the two approaches. This comparison demonstrates that: not all software-detected vulnerabilities can be reproduced in hardware; prior conjectures on the fault model for electromagnetic pulse attacks may not be accurate; and there is a relationship between the results of the software-based and hardware-based approaches. Further, combining the two approaches can yield a vastly more accurate and efficient approach to detecting genuine fault injection vulnerabilities.
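The software-based side of this approach can be illustrated with a toy sketch: enumerate every location in a (mock) program, inject a single instruction-skip fault at each, and check whether a security property still holds. The two-step program, the skip-only fault model, and the password-check property below are all illustrative assumptions, not the thesis's actual tooling.

```python
# Toy model of software-based fault injection vulnerability detection.
# The "program" is two steps over a state dict; the fault model is a
# single instruction skip. Everything here is an illustrative sketch.

def step_grant(s):
    # provisionally grant access (to be revoked by the check below)
    s["granted"] = True

def step_check(s):
    # enforce the password check
    s["granted"] = s["pw"] == "secret"

STEPS = [step_grant, step_check]

def run(state, skip=None):
    """Execute the toy program, optionally skipping one instruction."""
    for i, step in enumerate(STEPS):
        if i != skip:
            step(state)

def find_vulnerable_faults():
    """Enumerate every fault location and report those violating the
    security property: access is never granted for a wrong password."""
    hits = []
    for skip in range(len(STEPS)):
        state = {"pw": "wrong-password", "granted": False}
        run(state, skip)
        if state["granted"]:
            hits.append(skip)
    return hits
```

Skipping the check at index 1 leaves the provisional grant in place, which is exactly the kind of fault (e.g. skipping a compare via an electromagnetic pulse) that hardware experiments then try to reproduce.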
|
3 |
LLVM-IR based Decompilation. Ilsoo, Jeon, 06 June 2019 (has links)
No description available.
|
4 |
From Bytecode to Safety: Decompiling Smart Contracts for Vulnerability Analysis. Darwish, Malek, January 2024 (has links)
This thesis investigated the use of Large Language Models (LLMs) for vulnerability analysis of decompiled smart contracts. A controlled experiment was conducted in which an automated system was developed to decompile smart contracts using two decompilers, Dedaub and Heimdall-rs, and subsequently analyze them using three LLMs: OpenAI's GPT-4 and GPT-3.5, as well as Meta's CodeLlama. The study focused on assessing the effectiveness of the LLMs at identifying a range of vulnerabilities. The evaluation method included the collection and comparative analysis of performance metrics such as precision, recall, and F1-score. Our results show that the LLM-decompiler pairing of Dedaub and GPT-4 exhibits impressive detection capabilities across a range of vulnerabilities, while failing to detect some vulnerabilities at which CodeLlama excelled. We demonstrated the potential of LLMs to improve smart contract security and set the stage for future research to further expand on this domain.
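The metrics named in the abstract are the standard ones; as a quick reference, a minimal computation from raw counts (the counts themselves are made up for illustration):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts, guarding against division by zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical run: a detector raises 30 findings, 24 of them genuine,
# while missing 6 real vulnerabilities.
p, r, f = prf1(tp=24, fp=6, fn=6)
```

Comparing decompiler/LLM pairings on these three numbers is what allows statements like "Dedaub plus GPT-4 detects more, but CodeLlama catches some it misses" to be made precisely.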
|
5 |
Using a Web Server Test Bed to Analyze the Limitations of Web Application Vulnerability Scanners. Shelly, David Andrew, 17 September 2010 (has links)
The threat of cyber attacks due to improper security is a real and evolving danger. Corporate and personal data is breached and lost because of web application vulnerabilities thousands of times every year. The large number of cyber attacks can partially be attributed to the fact that web application vulnerability scanners are not used by web site administrators to scan for flaws. Web application vulnerability scanners are tools that can be used by network administrators and security experts to help prevent and detect vulnerabilities such as SQL injection, buffer overflows, cross-site scripting, malicious file execution, and session hijacking.
However, these tools have flaws and limitations of their own. Research has shown that web application vulnerability scanners are not always capable of detecting vulnerabilities and attack vectors, and do not give effective measurements of web application security. This research presents a method for analyzing the flaws and limitations of several of the most popular commercial and free/open-source web application scanners, using a secure and an insecure version of a custom-built web application. Based on this method, key improvements to web application scanner techniques are proposed to reduce the number of false-positive and false-negative results. / Master of Science
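The core of one scanner check can be sketched as: send a marker payload in a parameter and decide from the response whether the input comes back unescaped (a reflected-XSS indicator) or triggers a database error string (an SQL injection indicator). The HTTP layer is mocked with a plain callable so the sketch stays self-contained; the probes, error strings, and mock endpoints are illustrative assumptions, not any real scanner's logic.

```python
import html

# Marker payloads, deliberately simplistic: real scanners use large,
# encoding-aware payload lists.
XSS_PROBE = "<script>alert(1)</script>"
SQLI_PROBE = "'"

def scan_param(app, param):
    """Probe one parameter of `app` (a callable standing in for an HTTP
    request function) and return a list of suspected vulnerabilities."""
    findings = []
    if XSS_PROBE in app({param: XSS_PROBE}):
        findings.append("reflected-xss")
    resp = app({param: SQLI_PROBE})
    if "SQL syntax" in resp or "OperationalError" in resp:
        findings.append("sql-injection")
    return findings

def insecure_app(params):
    # echoes input verbatim: the "insecure version" of the test bed
    return "<html>You searched for: %s</html>" % params.get("q", "")

def secure_app(params):
    # HTML-escapes input: the "secure version" of the test bed
    return "<html>You searched for: %s</html>" % html.escape(params.get("q", ""))
```

Note how a purely reflection-based check already illustrates the false-negative problem the thesis studies: the insecure echo endpoint never emits a database error string, so the SQL injection probe reports nothing even though input handling is clearly unsafe.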
|
6 |
Detection of Prototype Pollution Using Joern: Joern's Detection Capability Compared to CodeQL's / Detektering av prototypförorening med hjälp av Joern: Joerns detekteringsförmåga jämfört med CodeQL:s. Fröberg, Tobias, January 2023 (has links)
JavaScript-built programs are widely used by the general public, but they are also vulnerable to JavaScript-related exploits stemming from the newly discovered prototype pollution vulnerability. Research has focused on understanding the impact of this vulnerability and on finding ways to detect it using code analysis tools. However, current tools have difficulty achieving both high accuracy and completeness, and many do not provide out-of-the-box support for detecting prototype pollution. This raises the possibility that tools without out-of-the-box support for the vulnerability may be better suited to certain environments and scenarios than the current state-of-the-art. This thesis aggregates the existing knowledge about prototype pollution detection and examines the detection capability of Joern, a code analysis tool without out-of-the-box support for prototype pollution detection, by comparing it to the state-of-the-art tool CodeQL. The comparison is made by analyzing their ability to detect prototype pollution in vulnerable Node.js packages. Both tools use queries to analyze code. An implemented Joern query is compared to the prototype pollution queries included with CodeQL, as well as to a CodeQL query taken from the literature. The results show that Joern is capable of identifying prototype pollution vulnerabilities, but it also wrongly reports more places as vulnerable than it correctly identifies. The same issue was found with the CodeQL query taken from the literature, which also found more vulnerabilities than the implemented Joern query. However, the implemented Joern query identified a larger number of vulnerabilities in the dataset than the queries included with CodeQL. Joern's misclassifications of code as (non)vulnerable were traced to JavaScript constructs and features not being correctly modeled, bugs in the tool, and difficulty in differentiating data structures from each other.
In conclusion, Joern can be used to detect prototype pollution vulnerabilities but requires further development and research to improve its detection capability.
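Since this listing has no code of its own, here is a didactic Python model of the JavaScript mechanism under analysis: objects fall back to a shared prototype on missing properties, and a naive recursive merge of attacker-controlled JSON can write through a `__proto__` key into that shared prototype. This is an illustration of the vulnerability pattern only, not Joern or CodeQL query code.

```python
# Python model of JavaScript prototype pollution. A plain dict plays
# Object.prototype; lookup() mimics JS property access with prototype
# fallback; merge() is the classic vulnerable recursive-merge pattern.

PROTOTYPE = {}  # stands in for the shared Object.prototype

def lookup(obj, key):
    """Property access with prototype fallback, like JS `obj.key`."""
    return obj[key] if key in obj else PROTOTYPE.get(key)

def merge(dst, src):
    """Vulnerable recursive merge: '__proto__' is treated as an
    ordinary key and resolves to the shared prototype."""
    for key, value in src.items():
        if isinstance(value, dict):
            target = PROTOTYPE if key == "__proto__" else dst.setdefault(key, {})
            merge(target, value)
        else:
            dst[key] = value

# Attacker-controlled payload pollutes every object in the "runtime":
profile = {}
merge(profile, {"name": "alice", "__proto__": {"isAdmin": True}})
victim = {}  # an unrelated, freshly created object
```

After the merge, `lookup(victim, "isAdmin")` yields `True` even though `victim` was never touched, which is why static tools hunt for merge/assignment sinks reachable from untrusted keys.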
|
7 |
Two complementary approaches to detecting vulnerabilities in C programs / Deux approches complémentaires pour la détection de vulnérabilités dans les programmes C. Jimenez, Willy, 04 October 2013 (has links)
In general, computer software vulnerabilities are defined as special cases where an unexpected behavior of the system leads to the degradation of security properties or the violation of security policies. These vulnerabilities can be exploited by malicious users or systems, impacting the security and/or operation of the attacked system. Since documentation on vulnerabilities is not always available to developers, and the tools they use do not let them detect and avoid them, the software industry continues to be affected by security breaches. The detection of vulnerabilities in software has therefore become a major concern and research area. Our research was done within the scope of the SHIELDS European project and focuses on modeling techniques and formal detection of vulnerabilities. In this area, existing approaches are few and do not always rely on a precise formal modeling of the vulnerabilities they target. Additionally, the underlying detection tools produce a significant number of false positives/negatives. It is also quite difficult for a developer to know which vulnerabilities each tool detects, because the tools are poorly documented. In summary, the contributions of this thesis are: the definition of a tabular formalism for describing vulnerabilities, called a template; the definition of a formal language, called Vulnerability Detection Condition (VDC), which can accurately model the occurrence of a vulnerability; an approach for generating VDCs from templates; the definition of a second approach for detecting vulnerabilities, combining model checking and fault injection; and an experimental evaluation of both approaches.
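A Vulnerability Detection Condition can be read as "performing action P while condition C holds is a vulnerability". A toy checker over an abstract execution trace makes the idea concrete; the trace format, the fact names, and the sample VDC below are illustrative assumptions, not the thesis's actual formalism.

```python
# Toy Vulnerability Detection Condition (VDC) checker. A VDC pairs an
# action with a condition under which performing it is dangerous.
# Traces are lists of (action, facts) pairs; all names are illustrative.

VDCS = [
    # "a buffer copy while the source length is unchecked is vulnerable"
    {"action": "strcpy", "condition": "len_unchecked"},
]

def check_trace(trace):
    """Return (step, action, condition) triples where some VDC fires."""
    hits = []
    for step, (action, facts) in enumerate(trace):
        for vdc in VDCS:
            if action == vdc["action"] and vdc["condition"] in facts:
                hits.append((step, action, vdc["condition"]))
    return hits

# An abstract trace of a program that copies input before any bounds check.
trace = [
    ("read_input", {"len_unchecked"}),
    ("strcpy",     {"len_unchecked"}),
]
hits = check_trace(trace)
```

In the thesis's setting the traces come from model checking the program rather than from a hand-written list, but the matching step is the same shape: a dangerous action evaluated against the conditions holding at that state.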
|
8 |
Detecting Security Patches in Java OSS Projects Using NLP. Stefanoni, Andrea, January 2022 (has links)
The use of Open Source Software is becoming more and more popular, but it comes with the risk of importing vulnerabilities into private codebases. Security patches, which provide fixes for detected vulnerabilities, are vital in protecting against cyber attacks, so being able to apply all security patches as soon as they are released is key. Even though there is a public database of vulnerability fixes, the majority of fixes remain undisclosed to the public. We therefore propose a machine learning algorithm using NLP to detect security patches in Java Open Source Software. To train the models, we preprocessed and extracted patches from the commits in two databases: one provided by Debricked and a public one released by Ponta et al. [57]. Two experiments were conducted: one performing binary classification, and one attempting higher granularity by classifying the macro-type of the vulnerability. The proposed models leverage the structure of the input to obtain a better patch representation. They are based on RNNs, Transformers, and CodeBERT [22], with the best-performing model being the Transformer, which surprisingly outperformed CodeBERT. The results show that it is possible to classify security patches, and that using more relevant pre-training techniques or tree-based representations of the code might further improve performance.
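The preprocessing step (turning commits into model input) can be sketched as splitting a unified diff into separate added and removed token streams, the kind of structured representation such classifiers consume. The tokenizer, the sample diff, and the `sanitize` helper in it are illustrative assumptions, not the thesis's pipeline.

```python
import re

def patch_tokens(diff_text):
    """Split a unified diff into (added, removed) identifier-token lists,
    skipping file headers and hunk markers; a crude stand-in for real
    patch preprocessing."""
    added, removed = [], []
    for line in diff_text.splitlines():
        if line.startswith(("+++", "---", "@@")):
            continue
        tokens = re.findall(r"[A-Za-z_]\w*", line[1:])
        if line.startswith("+"):
            added.extend(tokens)
        elif line.startswith("-"):
            removed.extend(tokens)
    return added, removed

# Hypothetical security patch: wrapping a request parameter in a sanitizer.
diff = """\
--- a/Session.java
+++ b/Session.java
@@ -10,1 +10,1 @@
-  String token = request.getParameter("token");
+  String token = sanitize(request.getParameter("token"));
"""
added, removed = patch_tokens(diff)
```

Keeping added and removed tokens separate preserves exactly the signal a security-patch classifier needs: here the only net change is the appearance of a sanitization call.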
|
9 |
Fuzzing Radio Resource Control messages in 5G and LTE systems: To test telecommunication systems with ASN.1 grammar rules based adaptive fuzzer / Fuzzing Radio Resource Control-meddelanden i 5G och LTE-system. Potnuru, Srinath, January 2021 (has links)
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review, and conformance testing. With fuzzing, testers can send crafted inputs and monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for telecom systems, produces new versions of specifications as part of continuously evolving features and enhancements. This leads to many versions of specifications for a network protocol like Radio Resource Control (RRC), and testers need to constantly update their testing tools and testing environment. In this work, it is shown that by using the generic nature of the RRC specifications, which are given in the Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool that adapts to all versions of the 3GPP specifications. This thesis introduces an ASN.1-based adaptive fuzzer that can be used for testing RRC and other network protocols based on the ASN.1 description language. The fuzzer extracts knowledge about ongoing RRC messages using the protocol description files of RRC, i.e., the RRC ASN.1 schema from 3GPP, and uses this knowledge to fuzz RRC messages. The adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to the specifications when mutating the content of existing messages. Furthermore, the adaptive fuzzer has identified a previously unknown vulnerability in the Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming its applicability to robustness testing of RRC and other network protocols.
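The fuzzer's key idea, mutating per field according to the message grammar rather than flipping raw bits, can be shown in miniature. A small dict stands in for the ASN.1 schema; the field names and constraints are loosely RRC-flavored but purely illustrative assumptions.

```python
import random

# Toy message grammar standing in for an ASN.1 schema: each field has a
# type and constraints, so mutations stay field-aware. Field names are
# RRC-flavored but illustrative only.
GRAMMAR = {
    "rrc_transaction_id": {"type": "int", "min": 0, "max": 3},
    "establishment_cause": {"type": "enum",
                            "values": ["emergency", "mo-Signalling", "mo-Data"]},
}

def mutate(message, rng):
    """Return a copy of `message` with one field replaced by a value of
    the right type that may violate the field's constraints."""
    msg = dict(message)
    field = rng.choice(sorted(GRAMMAR))
    spec = GRAMMAR[field]
    if spec["type"] == "int":
        # out-of-range integers probe the decoder's constraint handling
        msg[field] = rng.choice([spec["min"] - 1, spec["max"] + 1, 2**32 - 1])
    else:
        # undefined enum values probe unknown-choice handling
        msg[field] = rng.choice(spec["values"] + ["<undefined-enum>"])
    return msg

seed = {"rrc_transaction_id": 1, "establishment_cause": "mo-Data"}
rng = random.Random(0)
cases = [mutate(seed, rng) for _ in range(5)]
```

Because the mutations are driven by the grammar, swapping in a newer schema version changes the generated test cases without changing the fuzzer itself, which is the adaptivity the thesis is after.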
|