1 |
Concurrent Interprocedural Dataflow Analysis
Zou, Di, January 2015
Detecting bugs plays a significant role in software development. Bugs may lead to unexpected behaviors, and an attacker can gain control over a system by exploiting them. Such attacks are usually triggered by user input: unchecked input can cause serious problems in a program, so input must be validated carefully before it is used. Taint dataflow analysis provides the information about where user input can affect a program.
In this thesis, we introduce a concurrent solution for static taint dataflow analysis. The goal is to find the statements of a program that depend on user input and to inform developers so they can validate them. We provide a method for concurrent static taint dataflow analysis built on a sequential static taint dataflow analysis.
Static dataflow analysis is time consuming. This research addresses the challenge of analyzing dataflow efficiently. Our experiments show that our concurrent taint dataflow analysis improves the speed of analyzing complex programs.
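To make the idea concrete, here is a minimal sketch of fixed-point taint propagation in which independent functions are analyzed concurrently. It illustrates the general technique only, not the thesis's implementation; the statement forms, names, and the per-function parallelization granularity are all assumptions made for the example.

```typescript
// Minimal sketch: per-function taint propagation, with independent
// functions analyzed concurrently. Illustrative only.

type Stmt =
  | { kind: "input"; def: string }               // x = readUserInput()
  | { kind: "assign"; def: string; use: string } // x = y
  | { kind: "sink"; use: string };               // execute(x)

// Fixed-point taint propagation over a single function body.
function analyzeFunction(name: string, body: Stmt[]): string[] {
  const tainted = new Set<string>();
  const warnings: string[] = [];
  let changed = true;
  while (changed) { // iterate until no new variable becomes tainted
    changed = false;
    for (const s of body) {
      if (s.kind === "input" && !tainted.has(s.def)) {
        tainted.add(s.def); changed = true;
      } else if (s.kind === "assign" && tainted.has(s.use) && !tainted.has(s.def)) {
        tainted.add(s.def); changed = true;
      }
    }
  }
  for (const s of body)
    if (s.kind === "sink" && tainted.has(s.use))
      warnings.push(`${name}: tainted value reaches sink via '${s.use}'`);
  return warnings;
}

// Concurrency at function granularity: each body is independent here,
// so analyses can run in parallel (Promise.resolve stands in for
// dispatch to a worker thread).
async function analyzeProgram(fns: Map<string, Stmt[]>): Promise<string[]> {
  const results = await Promise.all(
    [...fns].map(([name, body]) => Promise.resolve(analyzeFunction(name, body))),
  );
  return results.flat();
}

analyzeProgram(new Map([
  ["handler", [
    { kind: "input", def: "req" },
    { kind: "assign", def: "cmd", use: "req" },
    { kind: "sink", use: "cmd" },
  ]],
])).then(ws => ws.forEach(w => console.log(w)));
```

Function-level parallelism is the simplest split; a truly interprocedural analysis must also propagate taint summaries across call edges, which serializes dependent work.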
|
2 |
FUZZING DEEPER LOGIC WITH IMPEDING FUNCTION TRANSFORMATION
Rowan Brock Hart (14205404), 02 December 2022
Fuzzing, a technique for negative testing of programs using randomly mutated or generated input data, is responsible for the discovery of thousands of bugs in software from web browsers to video players. Advances in fuzzing focus on various methods for increasing the number of bugs found and reducing the time spent to find them by applying static, dynamic, and symbolic binary analysis techniques. As a stochastic process, fuzzing is an inherently inefficient method for discovering bugs residing in the deep logic of programs, because the complexity of preconditions compounds as paths through programs grow in length. We propose a novel system to overcome this limitation by abstracting away path-constraining preconditions from the statement level to the function level, identifying impeding functions: functions that inhibit control flow from proceeding. REFACE is an end-to-end system that enhances the capabilities of an existing fuzzer by generating variant binaries that present an easier-to-fuzz interface, expanding an ongoing fuzzing campaign with minimal offline overhead. REFACE operates entirely on binary programs, requiring no source code or symbols to run, and is fuzzer-agnostic. This enhancement represents a step forward in a new direction toward abstraction of code that has historically presented a significant barrier to fuzzing, and aims to make incremental progress by way of several ancillary dataflow analysis techniques with potentially wide applicability. In evaluation against an unmodified state-of-the-art fuzzer with no augmentation, we attain a significant improvement in the speed of reaching maximum coverage, re-discover one known bug, and discover one possible new bug in a binary program.
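As a sketch of the impeding-function idea (not REFACE itself, which rewrites binaries without source), consider a checksum guard that a mutational fuzzer almost never satisfies; a fuzz-friendly variant stubs out the guard so the deep logic becomes reachable. All names and the re-validation step mentioned in the comments are illustrative assumptions.

```typescript
// Illustration: a checksum guard is an "impeding function" -- a random
// fuzzer almost never satisfies it, so the deep parsing logic below it
// is effectively unreachable.

function checksum(data: Uint8Array): number {
  return data.reduce((a, b) => (a + b) & 0xff, 0);
}

function parseOriginal(data: Uint8Array): void {
  // Precondition a mutational fuzzer satisfies with probability ~1/256:
  if (data.length < 2 || checksum(data.subarray(1)) !== data[0]) return;
  deepLogic(data.subarray(1));
}

// The variant replaces the impeding check with a stub, exposing
// deepLogic directly; a crash found here would then be re-validated
// against the original program with a repaired checksum.
function parseVariant(data: Uint8Array): void {
  if (data.length < 2) return;
  deepLogic(data.subarray(1));
}

function deepLogic(payload: Uint8Array): void {
  if (payload[0] === 0x42 && payload[1] === 0x42) {
    throw new Error("bug reached"); // stand-in for a latent bug
  }
}
```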
|
3 |
Automatic Deobfuscation and Reverse Engineering of Obfuscated Code
Yadegari, Babak, January 2016
Automatic malware analysis is an essential part of today's computer security practices. Nearly one million malware samples were delivered to analysts daily in 2014 alone, and the number of samples submitted for analysis increases almost exponentially each year. Given the size of the threat we face today and the amount of malicious code emerging every day, the ability to automatically analyze unknown and unwanted software is more critical than ever. Malware writers, in turn, adapt their malicious code to new security measures to protect it from being exposed and detected. This is usually achieved by employing obfuscation techniques that complicate the reverse engineering and analysis of the code by adding large amounts of unnecessary and irrelevant computation. Most malicious samples found in the wild are obfuscated and equipped with complicated anti-analysis defenses intended to hide the malicious intent of the malware by defeating the analysis and/or increasing the analysis time. Deobfuscation (reversing the obfuscation) requires automatic techniques to extract the original logic embedded in the obfuscated code for further analysis; the deobfuscated code should require less analysis time and be easier to analyze than the obfuscated one. Previous approaches in this regard target specific types of obfuscation by making strong assumptions about the underlying protection scheme, leaving opportunities for adversaries to attack. This work addresses this limitation by proposing new program analysis techniques that are effective against code obfuscation while remaining generic, by minimizing assumptions about the underlying code. We found that standard program analysis techniques, including well-known data and control flow analyses and symbolic execution, suffer from imprecision due to obfuscation, and we show how to mitigate this loss of precision. Using these more precise program analysis techniques, we propose a deobfuscation technique that succeeds in reversing complex obfuscations such as virtualization-obfuscation and Return-Oriented Programming (ROP).
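The following toy example shows the flavor of what such an analysis must see through: dead computations and an opaque predicate wrapped around a one-line payload. It is an assumption-laden illustration, far simpler than virtualization-obfuscation or ROP-based protections.

```typescript
// Toy example of the noise obfuscators insert (not taken from any real
// malware sample): dead computations and an opaque predicate hide a
// one-line payload.

function obfuscated(x: number): number {
  let junk = (x * 7 + 3) ^ 0x5a;        // dead: never meaningfully used
  junk = (junk << 2) | (junk >>> 30);   // more dead computation
  const opaque = (x * x + x) % 2 === 0; // opaque predicate: always true,
                                        // since x*x + x = x(x+1) is even
  if (opaque) return x + 1;             // the real logic
  return junk;                          // unreachable decoy branch
}

// What a deobfuscator should recover once liveness analysis proves
// `junk` dead and the predicate constant:
function deobfuscated(x: number): number {
  return x + 1;
}
```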
|
4 |
A Formal Approach to Combining Prospective and Retrospective Security
Amir-Mohammadian, Sepehr, 01 January 2017
The major goal of this dissertation is to enhance software security by provably correct enforcement of in-depth policies. In-depth security policies refer to heterogeneous specifications of security strategies that must be followed before and after sensitive operations. Prospective security is the enforcement of security, or the detection of security violations, before the execution of sensitive operations, e.g., in authorization, authentication, and information flow. Retrospective security refers to security checks after the execution of sensitive operations, accomplished through accountability and deterrence. Retrospective security frameworks are built upon auditing in order to provide sufficient evidence to hold users accountable for their actions and potentially support other remediation actions. The correctness and efficiency of audit logs play significant roles in reaching the accountability goals required by retrospective, and consequently in-depth, security policies. This dissertation addresses correct audit logging in a formal framework.
Leveraging retrospective controls alongside existing prospective measures enhances security in numerous applications. This dissertation focuses on two major application spaces for in-depth enforcement. The first is to enhance prospective security through surveillance and accountability. For example, authorization mechanisms can be improved by guaranteed retrospective checks in environments where access denial carries a high cost, e.g., healthcare systems. The second application space is the amelioration of potentially flawed prospective measures through retrospective checks. For instance, erroneous implementations of input sanitization methods expose vulnerabilities in taint analysis tools that enforce direct-flow data integrity policies. In this regard, we propose an in-depth enforcement framework to mitigate such problems. We also propose a general semantic notion of explicit flow of information integrity in a high-level language with sanitization.
This dissertation studies the ways in which prospective and retrospective security can be enforced uniformly, in a provably correct manner, to handle security challenges in legacy systems. The provable correctness of our results relies on the formal programming-languages-based approach we take to providing software security assurance. Moreover, this dissertation includes an implementation of such in-depth enforcement mechanisms for a medical records web application.
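As a schematic of in-depth enforcement (an informal sketch only; the dissertation's contribution is a formal, provably correct framework, not an ad-hoc wrapper like this), a sensitive operation can carry both a prospective sanitization check and a retrospective audit record:

```typescript
// Informal sketch: prospective check + retrospective audit around a
// sensitive operation. All names are illustrative assumptions.

interface AuditEvent {
  time: string;
  user: string;
  action: string;
  allowed: boolean;
}

const auditLog: AuditEvent[] = [];

function sanitize(input: string): string {
  return input.replace(/[^\w@.\- ]/g, ""); // prospective: strip unsafe chars
}

function readRecord(user: string, patientId: string): string {
  const clean = sanitize(patientId);
  const allowed = clean.length > 0;
  // Retrospective: record the access so that flawed prospective checks
  // can be caught after the fact and users held accountable.
  auditLog.push({
    time: new Date().toISOString(),
    user,
    action: `readRecord(${clean})`,
    allowed,
  });
  if (!allowed) throw new Error("denied");
  return `record for ${clean}`;
}

readRecord("dr.lee", "patient-42");
console.log(auditLog);
```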
|
5 |
Development of a prototype taint tracing tool for security and other purposes
Kargén, Ulf, January 2012
In recent years there has been increasing interest in dynamic taint tracing of compiled software as a powerful analysis method for security and other purposes. Most existing approaches are highly application specific and tend to sacrifice precision in favor of performance. In this thesis project a generic taint tracing tool has been developed that can deliver high-precision taint information. By allowing an arbitrary number of taint labels to be stored for every tainted byte, accurate taint propagation can be achieved for values that are derived from multiple input bytes. The tool has been developed for x86 Linux systems using the dynamic binary instrumentation framework Valgrind. The basic theory of taint tracing and multi-label taint propagation is discussed, as well as the main concepts of implementing a taint tracing tool using dynamic binary instrumentation. The impact of multi-label taint propagation on performance and precision is evaluated. While multi-label taint propagation has a considerable impact on performance, experiments carried out using the tool show that large amounts of taint information are lost with approximate methods that use only one label per tainted byte.
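The precision difference is easy to see in miniature. The sketch below is illustrative only: the actual tool propagates labels per byte at the x86 instruction level under Valgrind, not over high-level values like these.

```typescript
// Sketch of multi-label vs single-label taint. Each tainted value
// carries the *set* of input offsets it derives from; combining two
// values unions their label sets.

type Labels = Set<number>; // a label is an offset into the input

function union(a: Labels, b: Labels): Labels {
  return new Set([...a, ...b]);
}

// y = x0 + x1: with multi-label propagation the result records that it
// depends on input bytes 0 AND 1.
const x0: Labels = new Set([0]);
const x1: Labels = new Set([1]);
const multi = union(x0, x1);  // {0, 1} -- precise

// A one-label-per-byte approximation must pick a single label, losing
// the dependence on the other input byte entirely.
const single = [...multi][0]; // 0 -- byte 1's influence is lost

console.log([...multi], single);
```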
|
6 |
A Security Related and Evidence-Based Holistic Ranking and Composition Framework for Distributed Services
Chowdhury, Nahida Sultana, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / The number of smart mobile devices has grown at a significant rate in recent years. This growth has resulted in an exponentially growing number of publicly available mobile Apps. To help users select suitable Apps from the choices on offer, App distribution platforms generally rank and recommend Apps based on average star ratings, the number of installs, and associated reviews: all external factors of an App. However, these ranking schemes typically ignore critical internal factors (e.g., bugs, security vulnerabilities, and data leaks) of the Apps. AppStores need to incorporate a holistic methodology that includes both internal and external factors to assign a level of trust to Apps, where the internal factors describe the associated potential security risks. This issue is even more crucial for newly available Apps, for which user reviews are sparse or the number of installs is still insignificant. In such a scenario, users may fail to estimate the potential risks of installing Apps from an AppStore.
This dissertation proposes a security-related and evidence-based ranking framework, called SERS (Security-related and Evidence-based Ranking Scheme), to compare similar Apps. The trust associated with an App is calculated from both internal and external factors (i.e., security flaws and user reviews) following an evidence-based approach and applying subjective logic principles. SERS is formalized and further enhanced in the second part of this dissertation, resulting in an enhanced version called E-SERS (Enhanced SERS). These enhancements include the ability to integrate any number of sources that can generate evidence for an App, and consideration of the temporal aspect and reputation of evidence sources. Both SERS and E-SERS are evaluated using publicly accessible Apps from the Google PlayStore, and the rankings they generate are compared with prevalent ranking techniques such as average star ratings and the Google PlayStore rankings. The experimental results indicate that E-SERS provides a comprehensive and holistic view of an App when compared with prevalent alternatives. E-SERS is also successful in identifying malicious Apps where other ranking schemes fail to address such vulnerabilities.
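One standard way to turn evidence counts into a trust score under subjective logic is Jøsang's binomial opinion; the sketch below shows that principle, though the exact aggregation SERS applies to security flaws and reviews may differ.

```typescript
// Sketch of an evidence-to-opinion mapping in subjective logic
// (Jøsang's binomial opinion). Illustrative of the core principle only.

interface Opinion {
  belief: number;      // b
  disbelief: number;   // d
  uncertainty: number; // u, with b + d + u = 1
}

const W = 2; // non-informative prior weight

function opinionFromEvidence(positive: number, negative: number): Opinion {
  const total = positive + negative + W;
  return {
    belief: positive / total,
    disbelief: negative / total,
    uncertainty: W / total,
  };
}

// Expected trust: belief plus the base rate's share of the uncertainty.
function trustScore(o: Opinion, baseRate = 0.5): number {
  return o.belief + baseRate * o.uncertainty;
}

// A new App with few reviews keeps high uncertainty, so its score stays
// near the base rate instead of swinging on sparse evidence.
const newApp = opinionFromEvidence(3, 0);    // 3 positive observations
const oldApp = opinionFromEvidence(300, 40); // long track record
console.log(trustScore(newApp).toFixed(3), trustScore(oldApp).toFixed(3));
```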
In the third part of this dissertation, the E-SERS framework is used to propose a trust-aware composition model at two different granularities. This model uses the trust score computed by E-SERS, along with the probability of an App belonging to the malicious category, as the attributes for selecting a composition at the two granularities. Finally, the trust-aware composition model is evaluated against both the average star rating and the trust score.
A holistic approach to computing a trust score, as proposed by E-SERS, will benefit all kinds of Apps, including newly published Apps that follow proper security measures but initially struggle in AppStore rankings for lack of a large number of reviews and ratings. Hence, E-SERS will be helpful to both developers and users. In addition, a composition model that uses such a holistic trust score will enable system integrators to create trust-aware distributed systems for their specific needs.
|
7 |
Dasty: Revealing Real-World Prototype Pollution Consequences with Dynamic Taint Analysis
Moosbrugger, Paul, January 2023
Prototype pollution is a vulnerability in JavaScript and other prototype-based languages that allows malicious actors to inject a property into an object's prototype. The injected property can subsequently trigger gadgets: source code sections that use the property in sensitive locations. Gadgets can lead to various exploits, including denial-of-service, data exfiltration, and arbitrary code execution (ACE). Current research focuses primarily on the detection of pollution, while only a few works discuss gadget detection, and those that do propose detection solutions either for browser-side applications or for selected frameworks. This thesis aims to answer how prototype pollution affects modern server-side applications built on the Node.js framework. We propose a system that can automatically detect potential prototype pollution gadgets in Node.js applications. We utilize dynamic taint tracking to find flows from polluted prototypes to exploitable functions. Our system consists of multiple distinct runs. A first run analyzes a program without changing the control flow, to avoid premature termination through exceptions and program crashes. In subsequent runs, the system selectively changes conditionals to increase coverage. Based on our methodology, we implement Dasty, a performant dynamic taint analysis for prototype pollution gadgets built on NodeProf and the Truffle Instrumentation Framework. Dasty can automatically analyze third-party packages by utilizing their test suites. We use our implementation to analyze the 5000 most depended-upon npm packages and verify the resulting flows systematically, focusing on ACE and similar high-profile vulnerabilities. Through the analysis, we identify 16 new gadgets in packages used by thousands of applications. Our results suggest that prototype pollution can lead to serious security issues in many modern applications.
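A self-contained toy (not one of the 16 gadgets found) shows both halves of the attack: an unsafe recursive merge that lets attacker-controlled JSON write to Object.prototype, and a gadget that later reads the polluted property in a sensitive position.

```typescript
// Toy prototype pollution plus gadget. The payload path and property
// names are illustrative assumptions.

function merge(dst: any, src: any): any {
  for (const key of Object.keys(src)) {
    if (typeof src[key] === "object" && src[key] !== null) {
      dst[key] = merge(dst[key] ?? {}, src[key]); // follows "__proto__"!
    } else {
      dst[key] = src[key];
    }
  }
  return dst;
}

// Attacker-controlled input pollutes every object's prototype
// (JSON.parse creates "__proto__" as an own property, which merge then
// dereferences through the inherited accessor, reaching Object.prototype):
merge({}, JSON.parse('{"__proto__": {"shell": "/tmp/evil.sh"}}'));

// Gadget: an options object that never set `shell` now inherits the
// polluted value; in a real app this might flow into child_process.
const opts: { shell?: string } = {};
console.log(opts.shell); // "/tmp/evil.sh" -> potential ACE
```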
|
8 |
Detecting Information Leakage in Android Malware Using Static Taint Analysis
Kelkar, Soham P., January 2017
No description available.
|
9 |
Multivariant Control Flow Analysis: Application to Behavior Detection in Programs
Laouadi, Rabah, 14 December 2016
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms, little known outside of academia, that can simulate and analyze a program to conservatively compute all information that can be conveyed in an expression.
In this thesis, we present these algorithms, called CFAs (Control Flow Analysis), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data through the control flow to determine whether it reaches a sink (an outgoing flow of the program). This combination, together with abstract interpretation for values, aims to identify as exhaustively as possible all behaviors performed by an application. The problem with this approach is the high number of false positives, which requires human post-processing. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously combinatorial, with complexity exponential in k. The first contribution of this thesis is to design a model and an implementation that is as efficient as possible, carefully separating the static and dynamic parts of the analysis to allow scalability. The second contribution is a new CFA variant based on k-l-CFA, called *-CFA, which makes the parameter k a property of each variant, increasing it only in the contexts that justify it. To evaluate the effectiveness of our implementation of k-l-CFA, we compare it with the Wala framework. We then validate the taint analysis and behavior detection on the DroidBench benchmark. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
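The context abstraction at the heart of k-CFA can be sketched briefly: a context is the string of the k most recent call sites, so a function reached through different caller chains is analyzed separately up to depth k. The toy below enumerates contexts over a hypothetical call graph and is only an illustration; k-l-CFA additionally tracks values and closure environments.

```typescript
// Sketch of k-limited call-string contexts. Call graph and names are
// illustrative assumptions.

type CallSite = string;
type Context = CallSite[]; // at most k call sites, most recent last

function extend(ctx: Context, site: CallSite, k: number): Context {
  if (k === 0) return [];
  return [...ctx, site].slice(-k); // k-limiting keeps the analysis finite
}

const callGraph: Record<string, CallSite[]> = {
  main: ["main->parse", "main->render"],
  parse: ["parse->log"],
  render: ["render->log"],
  log: [],
};
const target: Record<string, string> = {
  "main->parse": "parse", "main->render": "render",
  "parse->log": "log", "render->log": "log",
};

// Enumerate the abstract (function, context) pairs reachable from main.
function contexts(fn: string, ctx: Context, k: number, seen: Set<string>): void {
  const key = `${fn}@[${ctx.join(",")}]`;
  if (seen.has(key)) return;
  seen.add(key);
  for (const site of callGraph[fn])
    contexts(target[site], extend(ctx, site, k), k, seen);
}

const seen = new Set<string>();
contexts("main", [], 1, seen);
console.log([...seen]);
// With k=1, `log` is analyzed separately for the parse and render
// callers; with k=0 both collapse into one imprecise summary.
```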
|
10 |
Software Vulnerability Research Combining Fuzz Testing and Binary Code Analysis
Bekrar, Sofia, 10 October 2013
Fuzz testing (a.k.a. fuzzing) is one of the most effective approaches for discovering security vulnerabilities in large, closed-source software. Despite their wide use in the software industry, traditional fuzzing techniques suffer from poor coverage, which results in a large number of false negatives. The other common drawback is a lack of knowledge about the application's internals, which limits their ability to generate high-quality inputs; such techniques therefore have limited fault-detection capabilities. We present an automated smart fuzzing approach that combines advanced binary code analysis techniques. Our approach has five components: a test suite reduction technique, to optimize the ratio between the amount of covered code and the execution time; a fast and optimized code coverage measurement technique, to evaluate fuzzing effectiveness; a static analysis technique, to locate potentially sensitive sequences of code with respect to vulnerability patterns; an origin-aware dynamic taint analysis technique, to precisely identify the input fields that may trigger potential vulnerabilities; and an evolutionary test generation technique, to produce relevant inputs. We implemented our approach as an integrated tool chain and evaluated it on numerous industrial case studies. The obtained results demonstrate its effectiveness in automatically discovering zero-day vulnerabilities in major closed-source applications. Our approach is relevant to both defensive and offensive security purposes.
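The evolutionary component at the end of that chain can be sketched as a coverage-guided loop: mutate a corpus member and keep the mutant only if it exercises a new edge. This is a greatly simplified illustration; the actual approach drives real binaries, measures coverage with optimized instrumentation, and uses taint results to decide which input fields to mutate.

```typescript
// Coverage-guided evolutionary loop in miniature. The target, its edge
// IDs, and the corpus seeding are illustrative assumptions.

function targetCoverage(input: number[]): Set<number> {
  const edges = new Set<number>();
  edges.add(0);
  if (input[0] === 0x7f) {               // magic-byte check
    edges.add(1);
    if (input[1] === 0x45) edges.add(2); // deeper condition
  }
  return edges;
}

function mutate(input: number[]): number[] {
  const out = [...input];
  out[Math.floor(Math.random() * out.length)] = Math.floor(Math.random() * 256);
  return out;
}

// Keep a mutant only if it covers a new edge: the "fitness" signal.
const corpus: number[][] = [[0, 0]];
const globalCoverage = new Set<number>();
for (let i = 0; i < 10_000; i++) {
  const parent = corpus[Math.floor(Math.random() * corpus.length)];
  const child = mutate(parent);
  const cov = targetCoverage(child);
  if ([...cov].some(e => !globalCoverage.has(e))) {
    corpus.push(child); // new behavior -> retain as a future parent
    cov.forEach(e => globalCoverage.add(e));
  }
}
console.log(`edges covered: ${globalCoverage.size} of 3`);
```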
|